From patchwork Thu Jan 9 02:44:34 2025
X-Patchwork-Submitter: Ethan Chen
X-Patchwork-Id: 13931896
From: Ethan Chen <ethan84@andestech.com>
X-BeenThere: qemu-devel@nongnu.org
Subject: [PATCH v9 1/8] hw/core: Add config stream
Date: Thu, 9 Jan 2025 10:44:34 +0800
Message-ID: <20250109024441.3283671-2-ethan84@andestech.com>
In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com>
References: <20250109024441.3283671-1-ethan84@andestech.com>

Allow other devices to use hw/core/stream.c by selecting this config.
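[Editor's note: as an illustration only, not part of the patch, this is how a hypothetical device entry in a hw/*/Kconfig file could consume the new symbol, the same way the XILINX_AXI hunk below does for the existing user:

    config MY_STREAM_DEVICE     # hypothetical device, for illustration only
        bool
        select STREAM           # pulls hw/core/stream.c into the build via CONFIG_STREAM
]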
Reviewed-by: Alistair Francis Signed-off-by: Ethan Chen --- hw/Kconfig | 1 + hw/core/Kconfig | 3 +++ hw/core/meson.build | 2 +- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/hw/Kconfig b/hw/Kconfig index 1b4e9bb07f..cb12b8c11b 100644 --- a/hw/Kconfig +++ b/hw/Kconfig @@ -77,6 +77,7 @@ config XILINX config XILINX_AXI bool select PTIMER # for hw/dma/xilinx_axidma.c + select STREAM config XLNX_ZYNQMP bool diff --git a/hw/core/Kconfig b/hw/core/Kconfig index d1bdf765ee..dffa9a1b01 100644 --- a/hw/core/Kconfig +++ b/hw/core/Kconfig @@ -38,3 +38,6 @@ config SPLIT_IRQ config EIF bool depends on LIBCBOR && GNUTLS + +config STREAM + bool diff --git a/hw/core/meson.build b/hw/core/meson.build index ce9dfa3f4b..2871639301 100644 --- a/hw/core/meson.build +++ b/hw/core/meson.build @@ -22,7 +22,7 @@ system_ss.add(when: 'CONFIG_PLATFORM_BUS', if_true: files('platform-bus.c')) system_ss.add(when: 'CONFIG_PTIMER', if_true: files('ptimer.c')) system_ss.add(when: 'CONFIG_REGISTER', if_true: files('register.c')) system_ss.add(when: 'CONFIG_SPLIT_IRQ', if_true: files('split-irq.c')) -system_ss.add(when: 'CONFIG_XILINX_AXI', if_true: files('stream.c')) +system_ss.add(when: 'CONFIG_STREAM', if_true: files('stream.c')) system_ss.add(when: 'CONFIG_PLATFORM_BUS', if_true: files('sysbus-fdt.c')) system_ss.add(when: 'CONFIG_EIF', if_true: [files('eif.c'), zlib, libcbor, gnutls]) From patchwork Thu Jan 9 02:44:35 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931894 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 37BCAE77188 for ; Thu, 9 Jan 2025 03:12:53 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tVixy-0008Pb-VO; Wed, 08 Jan 2025 22:12:14 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tVixx-0008PJ-0U; Wed, 08 Jan 2025 22:12:13 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tVixu-0001Fm-Pu; Wed, 08 Jan 2025 22:12:12 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jR77028466; Thu, 9 Jan 2025 10:45:27 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com with ESMTPS id 5092j0oe027722 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 9 Jan 2025 10:45:00 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com (10.0.1.89) with Microsoft SMTP Server (TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:44:59 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 2/8] memory: Introduce memory region fetch operation Date: Thu, 9 Jan 2025 10:44:35 +0800 Message-ID: <20250109024441.3283671-3-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: 
<20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jR77028466 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Allow memory regions to have different behaviors for read and fetch operations. For example, the RISC-V IOPMP could raise an interrupt when the CPU tries to fetch from a non-executable region. If the fetch operation for a memory region is not implemented, the read operation will still be used for fetch operations. Signed-off-by: Ethan Chen --- accel/tcg/cputlb.c | 9 +++- include/exec/memory.h | 27 +++++++++++ system/memory.c | 104 ++++++++++++++++++++++++++++++++++++++++++ system/trace-events | 2 + 4 files changed, 140 insertions(+), 2 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index b4ccf0cdcb..71c16a1ac1 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1947,8 +1947,13 @@ static uint64_t int_ld_mmio_beN(CPUState *cpu, CPUTLBEntryFull *full, this_size = 1 << this_mop; this_mop |= MO_BE; - r = memory_region_dispatch_read(mr, mr_offset, &val, - this_mop, full->attrs); + if (type == MMU_INST_FETCH) { + r = memory_region_dispatch_fetch(mr, mr_offset, &val, + this_mop, full->attrs); + } else { + r = memory_region_dispatch_read(mr, mr_offset, &val, + this_mop, full->attrs); + } if (unlikely(r != MEMTX_OK)) { io_failed(cpu, full, addr, this_size, type, mmu_idx, r, ra); } diff --git a/include/exec/memory.h b/include/exec/memory.h index 9458e2801d..ed15d99c3c 100644 --- a/include/exec/memory.h +++ b/include/exec/memory.h @@ -273,6 +273,11 @@ struct MemoryRegionOps { hwaddr addr, uint64_t data, unsigned size); + /* Fetch from the memory region. @addr is relative to @mr; @size is + * in bytes. */ + uint64_t (*fetch)(void *opaque, + hwaddr addr, + unsigned size); MemTxResult (*read_with_attrs)(void *opaque, hwaddr addr, @@ -284,6 +289,11 @@ struct MemoryRegionOps { uint64_t data, unsigned size, MemTxAttrs attrs); + MemTxResult (*fetch_with_attrs)(void *opaque, + hwaddr addr, + uint64_t *data, + unsigned size, + MemTxAttrs attrs); enum device_endian endianness; /* Guest-visible constraints: */ @@ -2605,6 +2615,23 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr, MemOp op, MemTxAttrs attrs); + +/** + * memory_region_dispatch_fetch: perform a fetch directly to the specified + * MemoryRegion. 
+ * + * @mr: #MemoryRegion to access + * @addr: address within that region + * @pval: pointer to uint64_t which the data is written to + * @op: size, sign, and endianness of the memory operation + * @attrs: memory transaction attributes to use for the access + */ +MemTxResult memory_region_dispatch_fetch(MemoryRegion *mr, + hwaddr addr, + uint64_t *pval, + MemOp op, + MemTxAttrs attrs); + /** * address_space_init: initializes an address space * diff --git a/system/memory.c b/system/memory.c index 78e17e0efa..f57a86ce0e 100644 --- a/system/memory.c +++ b/system/memory.c @@ -477,6 +477,51 @@ static MemTxResult memory_region_read_with_attrs_accessor(MemoryRegion *mr, return r; } +static MemTxResult memory_region_fetch_accessor(MemoryRegion *mr, + hwaddr addr, + uint64_t *value, + unsigned size, + signed shift, + uint64_t mask, + MemTxAttrs attrs) +{ + uint64_t tmp; + + tmp = mr->ops->fetch(mr->opaque, addr, size); + if (mr->subpage) { + trace_memory_region_subpage_fetch(get_cpu_index(), mr, addr, tmp, size); + } else if (trace_event_get_state_backends(TRACE_MEMORY_REGION_OPS_FETCH)) { + hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr); + trace_memory_region_ops_fetch(get_cpu_index(), mr, abs_addr, tmp, size, + memory_region_name(mr)); + } + memory_region_shift_read_access(value, shift, mask, tmp); + return MEMTX_OK; +} + +static MemTxResult memory_region_fetch_with_attrs_accessor(MemoryRegion *mr, + hwaddr addr, + uint64_t *value, + unsigned size, + signed shift, + uint64_t mask, + MemTxAttrs attrs) +{ + uint64_t tmp = 0; + MemTxResult r; + + r = mr->ops->fetch_with_attrs(mr->opaque, addr, &tmp, size, attrs); + if (mr->subpage) { + trace_memory_region_subpage_fetch(get_cpu_index(), mr, addr, tmp, size); + } else if (trace_event_get_state_backends(TRACE_MEMORY_REGION_OPS_FETCH)) { + hwaddr abs_addr = memory_region_to_absolute_addr(mr, addr); + trace_memory_region_ops_fetch(get_cpu_index(), mr, abs_addr, tmp, size, + memory_region_name(mr)); + } + memory_region_shift_read_access(value, shift, mask, tmp); + return r; +} + static MemTxResult memory_region_write_accessor(MemoryRegion *mr, hwaddr addr, uint64_t *value, @@ -1493,6 +1538,65 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr, return r; } +static MemTxResult memory_region_dispatch_fetch1(MemoryRegion *mr, + hwaddr addr, + uint64_t *pval, + unsigned size, + MemTxAttrs attrs) +{ + *pval = 0; + + if (mr->ops->fetch) { + return access_with_adjusted_size(addr, pval, size, + mr->ops->impl.min_access_size, + mr->ops->impl.max_access_size, + memory_region_fetch_accessor, + mr, attrs); + } else if (mr->ops->fetch_with_attrs) { + return access_with_adjusted_size(addr, pval, size, + mr->ops->impl.min_access_size, + mr->ops->impl.max_access_size, + memory_region_fetch_with_attrs_accessor, + mr, attrs); + } else if (mr->ops->read) { + return access_with_adjusted_size(addr, pval, size, + mr->ops->impl.min_access_size, + mr->ops->impl.max_access_size, + memory_region_read_accessor, + mr, attrs); + } else { + return access_with_adjusted_size(addr, pval, size, + mr->ops->impl.min_access_size, + mr->ops->impl.max_access_size, + memory_region_read_with_attrs_accessor, + mr, attrs); + } +} + +MemTxResult memory_region_dispatch_fetch(MemoryRegion *mr, + hwaddr addr, + uint64_t *pval, + MemOp op, + MemTxAttrs attrs) +{ + unsigned size = memop_size(op); + MemTxResult r; + + if (mr->alias) { + return memory_region_dispatch_fetch(mr->alias, + mr->alias_offset + addr, + pval, op, attrs); + } + if (!memory_region_access_valid(mr, addr, size, 
false, attrs)) { + *pval = unassigned_mem_read(mr, addr, size); + return MEMTX_DECODE_ERROR; + } + + r = memory_region_dispatch_fetch1(mr, addr, pval, size, attrs); + adjust_endianness(mr, pval, op); + return r; +} + /* Return true if an eventfd was signalled */ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr, hwaddr addr, diff --git a/system/trace-events b/system/trace-events index 5bbc3fbffa..4e78bb515b 100644 --- a/system/trace-events +++ b/system/trace-events @@ -18,8 +18,10 @@ cpu_out(unsigned int addr, char size, unsigned int val) "addr 0x%x(%c) value %u" # memory.c memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'" memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'" +memory_region_ops_fetch(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'" memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u" memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u" +memory_region_subpage_fetch(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u" memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u" memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u" memory_region_sync_dirty(const char *mr, const char *listener, int global) "mr '%s' listener '%s' synced (global=%d)" From patchwork Thu Jan 9 02:44:36 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931899 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 60D29E77188 for ; Thu, 9 Jan 2025 03:14:31 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tViyA-0008Rj-4p; Wed, 08 Jan 2025 22:12:26 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViy7-0008RD-S4; Wed, 08 Jan 2025 22:12:23 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViy6-0001HK-1b; Wed, 08 Jan 2025 22:12:23 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jXSf028543; Thu, 9 Jan 2025 10:45:33 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com 
with ESMTPS id 5092j0il027800 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 9 Jan 2025 10:45:00 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com (10.0.1.89) with Microsoft SMTP Server (TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:45:00 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 3/8] system/physmem: Support IOMMU granularity smaller than TARGET_PAGE size Date: Thu, 9 Jan 2025 10:44:36 +0800 Message-ID: <20250109024441.3283671-4-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jXSf028543 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org If the IOMMU granularity is smaller than the TARGET_PAGE size, there may be multiple entries within the same page. To obtain the correct result, pass the original address to the IOMMU. Similar to the RISC-V PMP solution, the TLB_INVALID_MASK will be set when there are multiple entries in the same page, ensuring that the IOMMU is checked on every access. Signed-off-by: Ethan Chen Acked-by: Alistair Francis --- accel/tcg/cputlb.c | 20 ++++++++++++++++---- system/physmem.c | 4 ++++ 2 files changed, 20 insertions(+), 4 deletions(-) diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c index 71c16a1ac1..ed55f02eab 100644 --- a/accel/tcg/cputlb.c +++ b/accel/tcg/cputlb.c @@ -1063,8 +1063,23 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, prot = full->prot; asidx = cpu_asidx_from_attrs(cpu, full->attrs); - section = address_space_translate_for_iotlb(cpu, asidx, paddr_page, + section = address_space_translate_for_iotlb(cpu, asidx, full->phys_addr, &xlat, &sz, full->attrs, &prot); + /* Update page size */ + full->lg_page_size = ctz64(sz); + if (full->lg_page_size > TARGET_PAGE_BITS) { + full->lg_page_size = TARGET_PAGE_BITS; + } else { + sz = TARGET_PAGE_SIZE; + } + + is_ram = memory_region_is_ram(section->mr); + is_romd = memory_region_is_romd(section->mr); + /* If the translated mr is ram/rom, make xlat align the TARGET_PAGE */ + if (is_ram || is_romd) { + xlat &= TARGET_PAGE_MASK; + } + assert(sz >= TARGET_PAGE_SIZE); tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx @@ -1077,9 +1092,6 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx, read_flags |= TLB_INVALID_MASK; } - is_ram = memory_region_is_ram(section->mr); - is_romd = memory_region_is_romd(section->mr); - if (is_ram || is_romd) { /* RAM and ROMD both have associated host memory. 
*/ addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat; diff --git a/system/physmem.c b/system/physmem.c index c76503aea8..d64543b413 100644 --- a/system/physmem.c +++ b/system/physmem.c @@ -702,6 +702,10 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr, iotlb = imrc->translate(iommu_mr, addr, IOMMU_NONE, iommu_idx); addr = ((iotlb.translated_addr & ~iotlb.addr_mask) | (addr & iotlb.addr_mask)); + /* Update size */ + if (iotlb.addr_mask != -1 && *plen > iotlb.addr_mask + 1) { + *plen = iotlb.addr_mask + 1; + } /* Update the caller's prot bits to remove permissions the IOMMU * is giving us a failure response for. If we get down to no * permissions left at all we can give up now. From patchwork Thu Jan 9 02:44:37 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931901 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 90BCDE77199 for ; Thu, 9 Jan 2025 03:14:32 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tViyD-0008ST-NK; Wed, 08 Jan 2025 22:12:29 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyC-0008S5-5W; Wed, 08 Jan 2025 22:12:28 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyA-0001Ho-E3; Wed, 08 Jan 2025 22:12:27 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jXYk028556; Thu, 9 Jan 2025 10:45:33 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com with ESMTPS id 5092j0DH027816 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 9 Jan 2025 10:45:00 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com (10.0.1.89) with Microsoft SMTP Server (TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:45:00 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 4/8] target/riscv: Add support for IOPMP Date: Thu, 9 Jan 2025 10:44:37 +0800 Message-ID: <20250109024441.3283671-5-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jXYk028556 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org 
X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Signed-off-by: Ethan Chen Reviewed-by: Alistair Francis --- target/riscv/cpu.c | 3 +++ target/riscv/cpu_cfg.h | 2 ++ target/riscv/cpu_helper.c | 18 +++++++++++++++--- 3 files changed, 20 insertions(+), 3 deletions(-) diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c index b8d5120106..212e522ed7 100644 --- a/target/riscv/cpu.c +++ b/target/riscv/cpu.c @@ -2798,6 +2798,9 @@ static const Property riscv_cpu_properties[] = { * it with -x and default to 'false'. */ DEFINE_PROP_BOOL("x-misa-w", RISCVCPU, cfg.misa_w, false), + + DEFINE_PROP_BOOL("iopmp", RISCVCPU, cfg.iopmp, false), + DEFINE_PROP_UINT32("iopmp_rrid", RISCVCPU, cfg.iopmp_rrid, 0), }; #if defined(TARGET_RISCV64) diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h index a1457ab4f4..c5d5e1a77d 100644 --- a/target/riscv/cpu_cfg.h +++ b/target/riscv/cpu_cfg.h @@ -175,6 +175,8 @@ struct RISCVCPUConfig { bool pmp; bool debug; bool misa_w; + bool iopmp; + uint32_t iopmp_rrid; bool short_isa_string; diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c index f62b21e182..926ae38684 100644 --- a/target/riscv/cpu_helper.c +++ b/target/riscv/cpu_helper.c @@ -1599,9 +1599,21 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size, } if (ret == TRANSLATE_SUCCESS) { - tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1), - prot, mmu_idx, tlb_size); - return true; + if (cpu->cfg.iopmp) { + /* + * Do not align address on early stage because IOPMP needs origin + * address for permission check. 
+ */
+                tlb_set_page_with_attrs(cs, address, pa,
+                                        (MemTxAttrs)
+                                        {
+                                            .requester_id = cpu->cfg.iopmp_rrid,
+                                        },
+                                        prot, mmu_idx, tlb_size);
+        } else {
+            tlb_set_page(cs, address & ~(tlb_size - 1), pa & ~(tlb_size - 1),
+                         prot, mmu_idx, tlb_size);
+        }
     } else if (probe) {
         return false;
     } else {

From patchwork Thu Jan 9 02:44:38 2025
X-Patchwork-Submitter: Ethan Chen
X-Patchwork-Id: 13931898
From: Ethan Chen <ethan84@andestech.com>
X-BeenThere: qemu-devel@nongnu.org
Subject: [PATCH v9 5/8] hw/misc/riscv_iopmp_txn_info: Add struct for transaction information
Date: Thu, 9 Jan 2025 10:44:38 +0800
Message-ID: <20250109024441.3283671-6-ethan84@andestech.com>
In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com>
References: <20250109024441.3283671-1-ethan84@andestech.com>

The entire valid transaction must fit within a single IOPMP
entry. However, during IOMMU translation, the transaction size is not available. This structure defines the transaction information required by the IOPMP. Signed-off-by: Ethan Chen --- include/hw/misc/riscv_iopmp_txn_info.h | 38 ++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) create mode 100644 include/hw/misc/riscv_iopmp_txn_info.h diff --git a/include/hw/misc/riscv_iopmp_txn_info.h b/include/hw/misc/riscv_iopmp_txn_info.h new file mode 100644 index 0000000000..98bd26b68b --- /dev/null +++ b/include/hw/misc/riscv_iopmp_txn_info.h @@ -0,0 +1,38 @@ +/* + * QEMU RISC-V IOPMP transaction information + * + * The transaction information structure provides the complete transaction + * length to the IOPMP device + * + * Copyright (c) 2023-2025 Andes Tech. Corp. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2 or later, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see . + */ + +#ifndef RISCV_IOPMP_TXN_INFO_H +#define RISCV_IOPMP_TXN_INFO_H + +typedef struct { + /* The id of requestor */ + uint32_t rrid:16; + /* The start address of transaction */ + uint64_t start_addr; + /* The end address of transaction */ + uint64_t end_addr; + /* The stage of cascading IOPMP */ + uint32_t stage; +} riscv_iopmp_txn_info; + +#endif From patchwork Thu Jan 9 02:44:39 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931897 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9559BE77188 for ; Thu, 9 Jan 2025 03:13:33 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tViyq-0000gW-C6; Wed, 08 Jan 2025 22:13:08 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyo-0000d6-4G; Wed, 08 Jan 2025 22:13:06 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyj-0001LY-6g; Wed, 08 Jan 2025 22:13:05 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jlTv028814; Thu, 9 Jan 2025 10:45:47 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com with ESMTPS id 5092j1SA027833 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 9 Jan 2025 10:45:01 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com (10.0.1.89) with Microsoft SMTP Server 
(TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:45:01 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 6/8] hw/misc/riscv_iopmp: Add RISC-V IOPMP device Date: Thu, 9 Jan 2025 10:44:39 +0800 Message-ID: <20250109024441.3283671-7-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jlTv028814 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Support IOPMP specification v0.9.2RC3. The specification url: https://github.com/riscv-non-isa/iopmp-spec/releases/tag/v0.9.2-RC3 The IOPMP checks whether memory access from a device or CPU is valid. This implementation uses an IOMMU to modify the address space accessed by the device. For device access with IOMMUAccessFlags specifying read or write (IOMMU_RO or IOMMU_WO), the IOPMP checks the permission in iopmp_translate. If the access is valid, the target address space is downstream_as. If the access is blocked, it will be redirected to blocked_rwx_as. For CPU access with IOMMUAccessFlags not specifying read or write (IOMMU_NONE), the IOPMP translates the access to the corresponding address space based on the permission. If the access has full permission (rwx), the target address space is downstream_as. If the access has limited permission, the target address space is blocked_ followed by the lacked permissions. The operation of a blocked region can trigger an IOPMP interrupt, a bus error, or it can respond with success and fabricated data, depending on the value of the IOPMP ERR_CFG register. Support Properties and Default Values of the IOPMP Device The following are the supported properties and their default values for the IOPMP device. If a property has no description here, please refer to the IOPMP specification for details: * mdcfg_fmt: 1 (Options: 0/1/2) * srcmd_fmt: 0 (Options: 0/1/2) * tor_en: true (Options: true/false) * sps_en: false (Options: true/false) * prient_prog: true (Options: true/false) * rrid_transl_en: false (Options: true/false) * rrid_transl_prog: false (Options: true/false) * chk_x: true (Options: true/false) * no_x: false (Options: true/false) * no_w: false (Options: true/false) * stall_en: false (Options: true/false) * peis: true (Options: true/false) * pees: true (Options: true/false) * mfr_en: true (Options: true/false) * md_entry_num: 5 (IMP: Valid only for mdcfg_fmt 1/2) * md_num: 8 (Range: 0-63) * rrid_num: 16 (Range: srcmd_fmt ≠ 2: 0-65535, srcmd_fmt = 2: 0-32) * entry_num: 48 (Range: 0-IMP. For mdcfg_fmt = 1, it is fixed as md_num * (md_entry_num + 1). 
Entry registers must not overlap with other registers.) * prio_entry: 65535 (Range: 0-IMP. If prio_entry > entry_num, it will be set to entry_num.) * rrid_transl: 0x0 (Range: 0-65535) * entry_offset: 0x4000 (IMP: Entry registers must not overlap with other registers.) * err_rdata: 0x0 (uint32. Specifies the value used in responses to read transactions when errors are suppressed) * msi_en: false (Options: true/false) * msidata: 12 (Range: 1-1023) * stall_violation_en: true (Options: true/false) * err_msiaddr: 0x24000000 (low-part 32-bit address) * err_msiaddrh: 0x0 (high-part 32-bit address) * msi_rrid: 0 (Range: 0-65535. Specifies the rrid used by the IOPMP to send the MSI.) Signed-off-by: Ethan Chen --- hw/misc/Kconfig | 4 + hw/misc/meson.build | 1 + hw/misc/riscv_iopmp.c | 2180 +++++++++++++++++++++++++++++++++ hw/misc/trace-events | 4 + include/hw/misc/riscv_iopmp.h | 191 +++ 5 files changed, 2380 insertions(+) create mode 100644 hw/misc/riscv_iopmp.c create mode 100644 include/hw/misc/riscv_iopmp.h diff --git a/hw/misc/Kconfig b/hw/misc/Kconfig index 8f9ce2f68c..e4ad9cf9fe 100644 --- a/hw/misc/Kconfig +++ b/hw/misc/Kconfig @@ -220,4 +220,8 @@ config IOSB config XLNX_VERSAL_TRNG bool +config RISCV_IOPMP + bool + select STREAM + source macio/Kconfig diff --git a/hw/misc/meson.build b/hw/misc/meson.build index 55f493521b..88f2bb6b88 100644 --- a/hw/misc/meson.build +++ b/hw/misc/meson.build @@ -34,6 +34,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_PRCI', if_true: files('sifive_e_prci.c')) system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c')) system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c')) system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c')) +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c')) subdir('macio') diff --git a/hw/misc/riscv_iopmp.c b/hw/misc/riscv_iopmp.c new file mode 100644 index 0000000000..1f8b912dd5 --- /dev/null +++ b/hw/misc/riscv_iopmp.c @@ -0,0 +1,2180 @@ +/* + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection) + * + * Copyright (c) 2023-2025 Andes Tech. Corp. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2 or later, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see . 
+ */ + +#include "qemu/osdep.h" +#include "qemu/log.h" +#include "qapi/error.h" +#include "trace.h" +#include "exec/exec-all.h" +#include "exec/address-spaces.h" +#include "hw/qdev-properties.h" +#include "hw/sysbus.h" +#include "hw/misc/riscv_iopmp.h" +#include "memory.h" +#include "hw/irq.h" +#include "hw/registerfields.h" +#include "trace.h" +#include "qemu/main-loop.h" +#include "hw/stream.h" +#include "hw/misc/riscv_iopmp_txn_info.h" + +#define TYPE_RISCV_IOPMP_IOMMU_MEMORY_REGION "riscv-iopmp-iommu-memory-region" + +REG32(VERSION, 0x00) + FIELD(VERSION, VENDOR, 0, 24) + FIELD(VERSION, SPECVER , 24, 8) +REG32(IMPLEMENTATION, 0x04) + FIELD(IMPLEMENTATION, IMPID, 0, 32) +REG32(HWCFG0, 0x08) + FIELD(HWCFG0, MDCFG_FMT, 0, 2) + FIELD(HWCFG0, SRCMD_FMT, 2, 2) + FIELD(HWCFG0, TOR_EN, 4, 1) + FIELD(HWCFG0, SPS_EN, 5, 1) + FIELD(HWCFG0, USER_CFG_EN, 6, 1) + FIELD(HWCFG0, PRIENT_PROG, 7, 1) + FIELD(HWCFG0, RRID_TRANSL_EN, 8, 1) + FIELD(HWCFG0, RRID_TRANSL_PROG, 9, 1) + FIELD(HWCFG0, CHK_X, 10, 1) + FIELD(HWCFG0, NO_X, 11, 1) + FIELD(HWCFG0, NO_W, 12, 1) + FIELD(HWCFG0, STALL_EN, 13, 1) + FIELD(HWCFG0, PEIS, 14, 1) + FIELD(HWCFG0, PEES, 15, 1) + FIELD(HWCFG0, MFR_EN, 16, 1) + FIELD(HWCFG0, MD_ENTRY_NUM, 17, 7) + FIELD(HWCFG0, MD_NUM, 24, 6) + FIELD(HWCFG0, ADDRH_EN, 30, 1) + FIELD(HWCFG0, ENABLE, 31, 1) +REG32(HWCFG1, 0x0C) + FIELD(HWCFG1, RRID_NUM, 0, 16) + FIELD(HWCFG1, ENTRY_NUM, 16, 16) +REG32(HWCFG2, 0x10) + FIELD(HWCFG2, PRIO_ENTRY, 0, 16) + FIELD(HWCFG2, RRID_TRANSL, 16, 16) +REG32(ENTRYOFFSET, 0x14) + FIELD(ENTRYOFFSET, OFFSET, 0, 32) +REG32(MDSTALL, 0x30) + FIELD(MDSTALL, EXEMPT, 0, 1) + FIELD(MDSTALL, MD, 1, 31) +REG32(MDSTALLH, 0x34) + FIELD(MDSTALLH, MD, 0, 32) +REG32(RRIDSCP, 0x38) + FIELD(RRIDSCP, RRID, 0, 16) + FIELD(RRIDSCP, OP, 30, 2) + FIELD(RRIDSCP, STAT, 30, 2) +REG32(MDLCK, 0x40) + FIELD(MDLCK, L, 0, 1) + FIELD(MDLCK, MD, 1, 31) +REG32(MDLCKH, 0x44) + FIELD(MDLCKH, MDH, 0, 32) +REG32(MDCFGLCK, 0x48) + FIELD(MDCFGLCK, L, 0, 1) + FIELD(MDCFGLCK, F, 1, 7) +REG32(ENTRYLCK, 0x4C) + FIELD(ENTRYLCK, L, 0, 1) + FIELD(ENTRYLCK, F, 1, 16) +REG32(ERR_CFG, 0x60) + FIELD(ERR_CFG, L, 0, 1) + FIELD(ERR_CFG, IE, 1, 1) + FIELD(ERR_CFG, RS, 2, 1) + FIELD(ERR_CFG, MSI_EN, 3, 1) + FIELD(ERR_CFG, STALL_VIOLATION_EN, 4, 1) + FIELD(ERR_CFG, MSIDATA, 8, 11) +REG32(ERR_INFO, 0x64) + FIELD(ERR_INFO, V, 0, 1) + FIELD(ERR_INFO, TTYPE, 1, 2) + FIELD(ERR_INFO, MSI_WERR, 3, 1) + FIELD(ERR_INFO, ETYPE, 4, 4) + FIELD(ERR_INFO, SVC, 8, 1) +REG32(ERR_REQADDR, 0x68) + FIELD(ERR_REQADDR, ADDR, 0, 32) +REG32(ERR_REQADDRH, 0x6C) + FIELD(ERR_REQADDRH, ADDRH, 0, 32) +REG32(ERR_REQID, 0x70) + FIELD(ERR_REQID, RRID, 0, 16) + FIELD(ERR_REQID, EID, 16, 16) +REG32(ERR_MFR, 0x74) + FIELD(ERR_MFR, SVW, 0, 16) + FIELD(ERR_MFR, SVI, 16, 12) + FIELD(ERR_MFR, SVS, 31, 1) +REG32(ERR_MSIADDR, 0x78) +REG32(ERR_MSIADDRH, 0x7C) +REG32(MDCFG0, 0x800) + FIELD(MDCFG0, T, 0, 16) +REG32(SRCMD_EN0, 0x1000) + FIELD(SRCMD_EN0, L, 0, 1) + FIELD(SRCMD_EN0, MD, 1, 31) +REG32(SRCMD_ENH0, 0x1004) + FIELD(SRCMD_ENH0, MDH, 0, 32) +REG32(SRCMD_R0, 0x1008) + FIELD(SRCMD_R0, MD, 1, 31) +REG32(SRCMD_RH0, 0x100C) + FIELD(SRCMD_RH0, MDH, 0, 32) +REG32(SRCMD_W0, 0x1010) + FIELD(SRCMD_W0, MD, 1, 31) +REG32(SRCMD_WH0, 0x1014) + FIELD(SRCMD_WH0, MDH, 0, 32) +REG32(SRCMD_PERM0, 0x1000) +REG32(SRCMD_PERMH0, 0x1004) + +FIELD(ENTRY_ADDR, ADDR, 0, 32) +FIELD(ENTRY_ADDRH, ADDRH, 0, 32) + +FIELD(ENTRY_CFG, R, 0, 1) +FIELD(ENTRY_CFG, W, 1, 1) +FIELD(ENTRY_CFG, X, 2, 1) +FIELD(ENTRY_CFG, A, 3, 2) +FIELD(ENTRY_CFG, SIE, 5, 3) +FIELD(ENTRY_CFG, SIRE, 5, 1) 
+FIELD(ENTRY_CFG, SIWE, 6, 1) +FIELD(ENTRY_CFG, SIXE, 7, 1) +FIELD(ENTRY_CFG, SEE, 8, 3) +FIELD(ENTRY_CFG, SERE, 8, 1) +FIELD(ENTRY_CFG, SEWE, 9, 1) +FIELD(ENTRY_CFG, SEXE, 10, 1) + +FIELD(ENTRY_USER_CFG, IM, 0, 32) + +/* Offsets to SRCMD_EN(i) */ +#define SRCMD_EN_OFFSET 0x0 +#define SRCMD_ENH_OFFSET 0x4 +#define SRCMD_R_OFFSET 0x8 +#define SRCMD_RH_OFFSET 0xC +#define SRCMD_W_OFFSET 0x10 +#define SRCMD_WH_OFFSET 0x14 + +/* Offsets to SRCMD_PERM(i) */ +#define SRCMD_PERM_OFFSET 0x0 +#define SRCMD_PERMH_OFFSET 0x4 + +/* Offsets to ENTRY_ADDR(i) */ +#define ENTRY_ADDR_OFFSET 0x0 +#define ENTRY_ADDRH_OFFSET 0x4 +#define ENTRY_CFG_OFFSET 0x8 +#define ENTRY_USER_CFG_OFFSET 0xC + +#define IOPMP_MAX_MD_NUM 63 +#define IOPMP_MAX_RRID_NUM 32 +#define IOPMP_SRCMDFMT0_MAX_RRID_NUM 65535 +#define IOPMP_SRCMDFMT2_MAX_RRID_NUM 32 +#define IOPMP_MAX_ENTRY_NUM 65535 + +/* The ids of iopmp are temporary */ +#define VENDER_VIRT 0 +#define SPECVER_0_9_2 92 +#define IMPID_0_9_2 92 + +typedef enum { + RS_ERROR, + RS_SUCCESS, +} iopmp_reaction; + +typedef enum { + RWE_ERROR, + RWE_SUCCESS, +} iopmp_write_reaction; + +typedef enum { + RXE_ERROR, + RXE_SUCCESS_VALUE, +} iopmp_exec_reaction; + +typedef enum { + ERR_INFO_TTYPE_NOERROR, + ERR_INFO_TTYPE_READ, + ERR_INFO_TTYPE_WRITE, + ERR_INFO_TTYPE_FETCH +} iopmp_err_info_ttype; + +typedef enum { + ERR_INFO_ETYPE_NOERROR, + ERR_INFO_ETYPE_READ, + ERR_INFO_ETYPE_WRITE, + ERR_INFO_ETYPE_FETCH, + ERR_INFO_ETYPE_PARHIT, + ERR_INFO_ETYPE_NOHIT, + ERR_INFO_ETYPE_RRID, + ERR_INFO_ETYPE_USER, + ERR_INFO_ETYPE_STALL +} iopmp_err_info_etype; + +typedef enum { + IOPMP_ENTRY_NO_HIT, + IOPMP_ENTRY_PAR_HIT, + IOPMP_ENTRY_HIT +} iopmp_entry_hit; + +typedef enum { + IOPMP_AMATCH_OFF, /* Null (off) */ + IOPMP_AMATCH_TOR, /* Top of Range */ + IOPMP_AMATCH_NA4, /* Naturally aligned four-byte region */ + IOPMP_AMATCH_NAPOT /* Naturally aligned power-of-two region */ +} iopmp_am_t; + +typedef enum { + IOPMP_ACCESS_READ = 1, + IOPMP_ACCESS_WRITE = 2, + IOPMP_ACCESS_FETCH = 3 +} iopmp_access_type; + +typedef enum { + IOPMP_NONE = 0, + IOPMP_RO = 1, + IOPMP_WO = 2, + IOPMP_RW = 3, + IOPMP_XO = 4, + IOPMP_RX = 5, + IOPMP_WX = 6, + IOPMP_RWX = 7, +} iopmp_permission; + +typedef enum { + RRIDSCP_OP_QUERY = 0, + RRIDSCP_OP_STALL = 1, + RRIDSCP_OP_NO_STALL = 2, + RRIDSCP_OP_RESERVED = 3, +} rridscp_op; + +typedef enum { + RRIDSCP_STAT_NOT_IMPL = 0, + RRIDSCP_STAT_STALL = 1, + RRIDSCP_STAT_NO_STALL = 2, + RRIDSCP_STAT_RRID_NO_IMPL = 3, +} rridscp_stat; + +typedef struct entry_range { + int md; + /* Index of entry array */ + int start_idx; + int end_idx; +} entry_range; + +static void iopmp_iommu_notify(RISCVIOPMPState *s) +{ + IOMMUTLBEvent event = { + .entry = { + .iova = 0, + .translated_addr = 0, + .addr_mask = -1ULL, + .perm = IOMMU_NONE, + }, + .type = IOMMU_NOTIFIER_UNMAP, + }; + + for (int i = 0; i < s->rrid_num; i++) { + memory_region_notify_iommu(&s->iommu, i, event); + } +} + +static void iopmp_msi_send(RISCVIOPMPState *s) +{ + MemTxResult result; + uint64_t addr = ((uint64_t)(s->regs.err_msiaddrh) << 32) | + s->regs.err_msiaddr; + address_space_stl_le(&address_space_memory, addr, + FIELD_EX32(s->regs.err_cfg, ERR_CFG, MSIDATA), + (MemTxAttrs){.requester_id = s->msi_rrid}, &result); + if (result != MEMTX_OK) { + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, MSI_WERR, 1); + } +} + +static void iopmp_decode_napot(uint64_t a, uint64_t *sa, + uint64_t *ea) +{ + /* + * aaaa...aaa0 8-byte NAPOT range + * aaaa...aa01 16-byte NAPOT range + * aaaa...a011 32-byte NAPOT range + * 
... + * aa01...1111 2^XLEN-byte NAPOT range + * a011...1111 2^(XLEN+1)-byte NAPOT range + * 0111...1111 2^(XLEN+2)-byte NAPOT range + * 1111...1111 Reserved + */ + + a = (a << 2) | 0x3; + *sa = a & (a + 1); + *ea = a | (a + 1); +} + +static void iopmp_update_rule(RISCVIOPMPState *s, uint32_t entry_index) +{ + uint8_t this_cfg = s->regs.entry[entry_index].cfg_reg; + uint64_t this_addr = s->regs.entry[entry_index].addr_reg | + ((uint64_t)s->regs.entry[entry_index].addrh_reg << 32); + uint64_t prev_addr = 0u; + uint64_t sa = 0u; + uint64_t ea = 0u; + + if (entry_index >= 1u) { + prev_addr = s->regs.entry[entry_index - 1].addr_reg | + ((uint64_t)s->regs.entry[entry_index - 1].addrh_reg << 32); + } + + switch (FIELD_EX32(this_cfg, ENTRY_CFG, A)) { + case IOPMP_AMATCH_OFF: + sa = 0u; + ea = -1; + break; + + case IOPMP_AMATCH_TOR: + sa = (prev_addr) << 2; /* shift up from [xx:0] to [xx+2:2] */ + ea = ((this_addr) << 2) - 1u; + if (sa > ea) { + sa = ea = 0u; + } + break; + + case IOPMP_AMATCH_NA4: + sa = this_addr << 2; /* shift up from [xx:0] to [xx+2:2] */ + ea = (sa + 4u) - 1u; + break; + + case IOPMP_AMATCH_NAPOT: + iopmp_decode_napot(this_addr, &sa, &ea); + break; + + default: + sa = 0u; + ea = 0u; + break; + } + + s->entry_addr[entry_index].sa = sa; + s->entry_addr[entry_index].ea = ea; + iopmp_iommu_notify(s); +} + +static uint64_t iopmp_read(void *opaque, hwaddr addr, unsigned size) +{ + RISCVIOPMPState *s = RISCV_IOPMP(opaque); + uint32_t rz = 0; + uint32_t offset, idx; + /* Start value for ERR_MFR.svi */ + uint16_t svi_s; + + switch (addr) { + case A_VERSION: + rz = FIELD_DP32(rz, VERSION, VENDOR, VENDER_VIRT); + rz = FIELD_DP32(rz, VERSION, SPECVER, SPECVER_0_9_2); + break; + case A_IMPLEMENTATION: + rz = IMPID_0_9_2; + break; + case A_HWCFG0: + rz = FIELD_DP32(rz, HWCFG0, MDCFG_FMT, s->mdcfg_fmt); + rz = FIELD_DP32(rz, HWCFG0, SRCMD_FMT, s->srcmd_fmt); + rz = FIELD_DP32(rz, HWCFG0, TOR_EN, s->tor_en); + rz = FIELD_DP32(rz, HWCFG0, SPS_EN, s->sps_en); + rz = FIELD_DP32(rz, HWCFG0, USER_CFG_EN, 0); + rz = FIELD_DP32(rz, HWCFG0, PRIENT_PROG, s->prient_prog); + rz = FIELD_DP32(rz, HWCFG0, RRID_TRANSL_EN, s->rrid_transl_en); + rz = FIELD_DP32(rz, HWCFG0, RRID_TRANSL_PROG, s->rrid_transl_prog); + rz = FIELD_DP32(rz, HWCFG0, CHK_X, s->chk_x); + rz = FIELD_DP32(rz, HWCFG0, NO_X, s->no_x); + rz = FIELD_DP32(rz, HWCFG0, NO_W, s->no_w); + rz = FIELD_DP32(rz, HWCFG0, STALL_EN, s->stall_en); + rz = FIELD_DP32(rz, HWCFG0, PEIS, s->peis); + rz = FIELD_DP32(rz, HWCFG0, PEES, s->pees); + rz = FIELD_DP32(rz, HWCFG0, MFR_EN, s->mfr_en); + rz = FIELD_DP32(rz, HWCFG0, MD_ENTRY_NUM, s->md_entry_num); + rz = FIELD_DP32(rz, HWCFG0, MD_NUM, s->md_num); + rz = FIELD_DP32(rz, HWCFG0, ADDRH_EN, 1); + rz = FIELD_DP32(rz, HWCFG0, ENABLE, s->enable); + break; + case A_HWCFG1: + rz = FIELD_DP32(rz, HWCFG1, RRID_NUM, s->rrid_num); + rz = FIELD_DP32(rz, HWCFG1, ENTRY_NUM, s->entry_num); + break; + case A_HWCFG2: + rz = FIELD_DP32(rz, HWCFG2, PRIO_ENTRY, s->prio_entry); + rz = FIELD_DP32(rz, HWCFG2, RRID_TRANSL, s->rrid_transl); + break; + case A_ENTRYOFFSET: + rz = s->entry_offset; + break; + case A_MDSTALL: + if (s->stall_en) { + rz = s->regs.mdstall; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case A_MDSTALLH: + if (s->stall_en && s->md_num > 31) { + rz = s->regs.mdstallh; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case A_RRIDSCP: + if (s->stall_en) { + rz = s->regs.rridscp; + } else { + 
qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case A_ERR_CFG: + rz = s->regs.err_cfg; + break; + case A_MDLCK: + if (s->srcmd_fmt == 1) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else { + rz = s->regs.mdlck; + } + break; + case A_MDLCKH: + if (s->md_num < 31 || s->srcmd_fmt == 1) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else { + rz = s->regs.mdlckh; + } + break; + case A_MDCFGLCK: + if (s->mdcfg_fmt != 0) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + rz = s->regs.mdcfglck; + break; + case A_ENTRYLCK: + rz = s->regs.entrylck; + break; + case A_ERR_REQADDR: + rz = s->regs.err_reqaddr & UINT32_MAX; + break; + case A_ERR_REQADDRH: + rz = s->regs.err_reqaddr >> 32; + break; + case A_ERR_REQID: + rz = s->regs.err_reqid; + break; + case A_ERR_INFO: + rz = s->regs.err_info; + break; + case A_ERR_MFR: + if (!s->mfr_en) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + svi_s = s->svi; + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, SVC, 0); + while (1) { + if (s->svw[s->svi]) { + if (rz == 0) { + /* First svw is found */ + rz = FIELD_DP32(rz, ERR_MFR, SVW, s->svw[s->svi]); + rz = FIELD_DP32(rz, ERR_MFR, SVI, s->svi); + rz = FIELD_DP32(rz, ERR_MFR, SVS, 1); + /* Clear svw after read */ + s->svw[s->svi] = 0; + } else { + /* Other subsequent violation exists */ + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, + SVC, 1); + break; + } + } + s->svi++; + if (s->svi > (s->rrid_num / 16) + 1) { + s->svi = 0; + } + if (svi_s == s->svi) { + /* rounded back to the same value */ + break; + } + } + /* Set svi for next read */ + s->svi = FIELD_DP32(rz, ERR_MFR, SVI, s->svi); + break; + case A_ERR_MSIADDR: + rz = s->regs.err_msiaddr; + break; + case A_ERR_MSIADDRH: + rz = s->regs.err_msiaddrh; + break; + + default: + if (s->mdcfg_fmt == 0 && + addr >= A_MDCFG0 && + addr <= A_MDCFG0 + 4 * (s->md_num - 1)) { + offset = addr - A_MDCFG0; + if (offset % 4) { + rz = 0; + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else { + idx = offset >> 2; + rz = s->regs.mdcfg[idx]; + } + } else if (s->srcmd_fmt == 0 && + addr >= A_SRCMD_EN0 && + addr <= A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) { + offset = addr - A_SRCMD_EN0; + idx = offset >> 5; + offset &= 0x1f; + + if (s->sps_en || offset <= SRCMD_ENH_OFFSET) { + switch (offset) { + case SRCMD_EN_OFFSET: + rz = s->regs.srcmd_en[idx]; + break; + case SRCMD_ENH_OFFSET: + if (s->md_num > 31) { + rz = s->regs.srcmd_enh[idx]; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case SRCMD_R_OFFSET: + rz = s->regs.srcmd_r[idx]; + break; + case SRCMD_RH_OFFSET: + if (s->md_num > 31) { + rz = s->regs.srcmd_rh[idx]; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case SRCMD_W_OFFSET: + rz = s->regs.srcmd_w[idx]; + break; + case SRCMD_WH_OFFSET: + if (s->md_num > 31) { + rz = s->regs.srcmd_wh[idx]; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + default: + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + } else if (s->srcmd_fmt == 2 && + addr >= A_SRCMD_PERM0 && + addr <= A_SRCMD_PERMH0 + 32 * (s->md_num - 1)) { + offset = 
addr - A_SRCMD_PERM0; + idx = offset >> 5; + offset &= 0x1f; + switch (offset) { + case SRCMD_PERM_OFFSET: + rz = s->regs.srcmd_perm[idx]; + break; + case SRCMD_PERMH_OFFSET: + if (s->rrid_num > 16) { + rz = s->regs.srcmd_permh[idx]; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + default: + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + } else if (addr >= s->entry_offset && + addr <= s->entry_offset + ENTRY_USER_CFG_OFFSET + + 16 * (s->entry_num - 1)) { + offset = addr - s->entry_offset; + idx = offset >> 4; + offset &= 0xf; + + switch (offset) { + case ENTRY_ADDR_OFFSET: + rz = s->regs.entry[idx].addr_reg; + break; + case ENTRY_ADDRH_OFFSET: + rz = s->regs.entry[idx].addrh_reg; + break; + case ENTRY_CFG_OFFSET: + rz = s->regs.entry[idx].cfg_reg; + break; + default: + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + } + trace_iopmp_read(addr, rz); + return rz; +} + +static void update_rrid_stall(RISCVIOPMPState *s) +{ + bool exempt = FIELD_EX32(s->regs.mdstall, MDSTALL, EXEMPT); + uint64_t stall_by_md = ((uint64_t)s->regs.mdstall | + ((uint64_t)s->regs.mdstallh << 32)) >> 1; + uint64_t srcmd_en; + bool reduction_or; + if (s->srcmd_fmt != 2) { + for (int rrid = 0; rrid < s->rrid_num; rrid++) { + srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] | + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1; + reduction_or = 0; + if (srcmd_en & stall_by_md) { + reduction_or = 1; + } + s->rrid_stall[rrid] = exempt ^ reduction_or; + } + } else { + for (int rrid = 0; rrid < s->rrid_num; rrid++) { + if (stall_by_md) { + s->rrid_stall[rrid] = 1; + } else { + s->rrid_stall[rrid] = 0; + } + } + } + iopmp_iommu_notify(s); +} + +static inline void resume_stall(RISCVIOPMPState *s) +{ + for (int rrid = 0; rrid < s->rrid_num; rrid++) { + s->rrid_stall[rrid] = 0; + } + iopmp_iommu_notify(s); +} + +static void +iopmp_write(void *opaque, hwaddr addr, uint64_t value, unsigned size) +{ + RISCVIOPMPState *s = RISCV_IOPMP(opaque); + uint32_t offset, idx; + uint32_t value32 = value; + uint64_t mdlck; + uint32_t value_f; + uint32_t rrid; + uint32_t op; + trace_iopmp_write(addr, value32); + + switch (addr) { + case A_VERSION: /* RO */ + break; + case A_IMPLEMENTATION: /* RO */ + break; + case A_HWCFG0: + if (FIELD_EX32(value32, HWCFG0, RRID_TRANSL_PROG)) { + /* W1C */ + s->rrid_transl_prog = 0; + } + if (FIELD_EX32(value32, HWCFG0, PRIENT_PROG)) { + /* W1C */ + s->prient_prog = 0; + } + if (!s->enable && s->mdcfg_fmt == 2) { + /* Locked by enable bit */ + s->md_entry_num = FIELD_EX32(value32, HWCFG0, MD_ENTRY_NUM); + } + if (FIELD_EX32(value32, HWCFG0, ENABLE)) { + /* W1S */ + s->enable = 1; + iopmp_iommu_notify(s); + } + break; + case A_HWCFG1: /* RO */ + break; + case A_HWCFG2: + if (s->prient_prog) { + s->prio_entry = FIELD_EX32(value32, HWCFG2, PRIO_ENTRY); + iopmp_iommu_notify(s); + } + if (s->rrid_transl_prog) { + s->rrid_transl = FIELD_EX32(value32, HWCFG2, RRID_TRANSL); + iopmp_iommu_notify(s); + } + break; + case A_ENTRYOFFSET: + break; + case A_MDSTALL: + if (s->stall_en) { + s->regs.mdstall = value32; + if (value32) { + s->is_stalled = 1; + } else { + /* Resume if stall, stallh == 0 */ + if (s->regs.mdstallh == 0) { + s->is_stalled = 0; + } + } + update_rrid_stall(s); + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case 
A_MDSTALLH: + if (s->stall_en) { + s->regs.mdstallh = value32; + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case A_RRIDSCP: + if (s->stall_en) { + rrid = FIELD_EX32(value32, RRIDSCP, RRID); + op = FIELD_EX32(value32, RRIDSCP, OP); + if (op == RRIDSCP_OP_RESERVED) { + break; + } + s->regs.rridscp = value32; + if (rrid > s->rrid_num) { + s->regs.rridscp = FIELD_DP32(s->regs.rridscp, RRIDSCP, STAT, + RRIDSCP_STAT_RRID_NO_IMPL); + break; + } + switch (op) { + case RRIDSCP_OP_QUERY: + if (s->is_stalled) { + s->regs.rridscp = + FIELD_DP32(s->regs.rridscp, RRIDSCP, STAT, + 0x2 >> s->rrid_stall[rrid]); + } else { + s->regs.rridscp = FIELD_DP32(s->regs.rridscp, RRIDSCP, + STAT, + RRIDSCP_STAT_NO_STALL); + } + break; + case RRIDSCP_OP_STALL: + s->rrid_stall[rrid] = 1; + break; + case RRIDSCP_OP_NO_STALL: + s->rrid_stall[rrid] = 0; + break; + default: + break; + } + if (s->is_stalled) { + iopmp_iommu_notify(s); + } + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case A_ERR_CFG: + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) { + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, L, + FIELD_EX32(value32, ERR_CFG, L)); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, IE, + FIELD_EX32(value32, ERR_CFG, IE)); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, RS, + FIELD_EX32(value32, ERR_CFG, RS)); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSI_EN, + FIELD_EX32(value32, ERR_CFG, MSI_EN)); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, + STALL_VIOLATION_EN, FIELD_EX32(value32, ERR_CFG, + STALL_VIOLATION_EN)); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSIDATA, + FIELD_EX32(value32, ERR_CFG, MSIDATA)); + } + break; + case A_MDLCK: + if (s->srcmd_fmt == 1) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) { + /* sticky to 1 */ + s->regs.mdlck |= value32; + if (s->md_num <= 31) { + s->regs.mdlck = extract32(s->regs.mdlck, 0, s->md_num + 1); + } + } + break; + case A_MDLCKH: + if (s->md_num < 31 || s->srcmd_fmt == 1) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else if (!FIELD_EX32(s->regs.mdlck, MDLCK, L)) { + /* sticky to 1 */ + s->regs.mdlckh |= value32; + s->regs.mdlck = extract32(s->regs.mdlck, 0, s->md_num - 31); + } + break; + case A_MDCFGLCK: + if (s->mdcfg_fmt != 0) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + if (!FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, L)) { + value_f = FIELD_EX32(value32, MDCFGLCK, F); + if (value_f > FIELD_EX32(s->regs.mdcfglck, MDCFGLCK, F)) { + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, F, + value_f); + } + s->regs.mdcfglck = FIELD_DP32(s->regs.mdcfglck, MDCFGLCK, L, + FIELD_EX32(value32, MDCFGLCK, L)); + } + break; + case A_ENTRYLCK: + if (!(FIELD_EX32(s->regs.entrylck, ENTRYLCK, L))) { + value_f = FIELD_EX32(value32, ENTRYLCK, F); + if (value_f > FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) { + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, F, + value_f); + } + s->regs.entrylck = FIELD_DP32(s->regs.entrylck, ENTRYLCK, L, + FIELD_EX32(value32, ENTRYLCK, L)); + } + case A_ERR_REQADDR: /* RO */ + break; + case A_ERR_REQADDRH: /* RO */ + break; + case A_ERR_REQID: /* RO */ + break; + case A_ERR_INFO: + if (FIELD_EX32(value32, ERR_INFO, V)) { + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, V, 0); + 
qemu_set_irq(s->irq, 0); + } + if (FIELD_EX32(value32, ERR_INFO, MSI_WERR)) { + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, MSI_WERR, + 0); + } + break; + case A_ERR_MFR: + s->svi = FIELD_EX32(value32, ERR_MFR, SVI); + break; + case A_ERR_MSIADDR: + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) { + s->regs.err_msiaddr = value32; + } + break; + + case A_ERR_MSIADDRH: + if (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, L)) { + s->regs.err_msiaddrh = value32; + } + break; + + default: + if (s->mdcfg_fmt == 0 && + addr >= A_MDCFG0 && + addr <= A_MDCFG0 + 4 * (s->md_num - 1)) { + offset = addr - A_MDCFG0; + if (offset % 4) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else { + idx = offset >> 2; + s->regs.mdcfg[idx] = FIELD_EX32(value32, MDCFG0, T); + iopmp_iommu_notify(s); + } + } else if (s->srcmd_fmt == 0 && + addr >= A_SRCMD_EN0 && + addr <= A_SRCMD_WH0 + 32 * (s->rrid_num - 1)) { + offset = addr - A_SRCMD_EN0; + idx = offset >> 5; + offset &= 0x1f; + + if (offset % 4 || (!s->sps_en && offset > SRCMD_ENH_OFFSET)) { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } else if (FIELD_EX32(s->regs.srcmd_en[idx], SRCMD_EN0, L) == 0) { + /* MD field is protected by mdlck */ + value32 = (value32 & ~s->regs.mdlck) | + (s->regs.srcmd_en[idx] & s->regs.mdlck); + iopmp_iommu_notify(s); + switch (offset) { + case SRCMD_EN_OFFSET: + s->regs.srcmd_en[idx] = + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, L, + FIELD_EX32(value32, SRCMD_EN0, L)); + s->regs.srcmd_en[idx] = + FIELD_DP32(s->regs.srcmd_en[idx], SRCMD_EN0, MD, + FIELD_EX32(value32, SRCMD_EN0, MD)); + if (s->md_num <= 31) { + s->regs.srcmd_en[idx] = extract32(s->regs.srcmd_en[idx], + 0, s->md_num + 1); + } + break; + case SRCMD_ENH_OFFSET: + if (s->md_num > 31) { + s->regs.srcmd_enh[idx] = value32; + s->regs.srcmd_enh[idx] = + extract32(s->regs.srcmd_enh[idx], 0, + s->md_num - 31); + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case SRCMD_R_OFFSET: + s->regs.srcmd_r[idx] = + FIELD_DP32(s->regs.srcmd_r[idx], SRCMD_R0, MD, + FIELD_EX32(value32, SRCMD_R0, MD)); + if (s->md_num <= 31) { + s->regs.srcmd_r[idx] = extract32(s->regs.srcmd_r[idx], + 0, s->md_num + 1); + } + break; + case SRCMD_RH_OFFSET: + if (s->md_num > 31) { + s->regs.srcmd_rh[idx] = value32; + s->regs.srcmd_rh[idx] = + extract32(s->regs.srcmd_rh[idx], 0, + s->md_num - 31); + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + case SRCMD_W_OFFSET: + s->regs.srcmd_w[idx] = + FIELD_DP32(s->regs.srcmd_w[idx], SRCMD_W0, MD, + FIELD_EX32(value32, SRCMD_W0, MD)); + if (s->md_num <= 31) { + s->regs.srcmd_w[idx] = extract32(s->regs.srcmd_w[idx], + 0, s->md_num + 1); + } + break; + case SRCMD_WH_OFFSET: + if (s->md_num > 31) { + s->regs.srcmd_wh[idx] = value32; + s->regs.srcmd_wh[idx] = + extract32(s->regs.srcmd_wh[idx], 0, + s->md_num - 31); + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + default: + break; + } + } + } else if (s->srcmd_fmt == 2 && + addr >= A_SRCMD_PERM0 && + addr <= A_SRCMD_PERMH0 + 32 * (s->md_num - 1)) { + offset = addr - A_SRCMD_PERM0; + idx = offset >> 5; + offset &= 0x1f; + /* mdlck lock bit is removed */ + mdlck = ((uint64_t)s->regs.mdlck | + ((uint64_t)s->regs.mdlckh << 32)) >> 1; + iopmp_iommu_notify(s); + switch (offset) { + case SRCMD_PERM_OFFSET: + /* srcmd_perm[md] is protect by mdlck */ + if (((mdlck >> idx) & 0x1) == 0) { + 
s->regs.srcmd_perm[idx] = value32; + } + if (s->rrid_num <= 16) { + s->regs.srcmd_perm[idx] = extract32(s->regs.srcmd_perm[idx], + 0, 2 * s->rrid_num); + } + break; + case SRCMD_PERMH_OFFSET: + if (s->rrid_num > 16) { + if (((mdlck >> idx) & 0x1) == 0) { + s->regs.srcmd_permh[idx] = value32; + } + s->regs.srcmd_permh[idx] = + extract32(s->regs.srcmd_permh[idx], 0, + 2 * (s->rrid_num - 16)); + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + } + break; + default: + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + } else if (addr >= s->entry_offset && + addr <= s->entry_offset + ENTRY_USER_CFG_OFFSET + + 16 * (s->entry_num - 1)) { + offset = addr - s->entry_offset; + idx = offset >> 4; + offset &= 0xf; + + /* index < ENTRYLCK_F is protected */ + if (idx >= FIELD_EX32(s->regs.entrylck, ENTRYLCK, F)) { + switch (offset) { + case ENTRY_ADDR_OFFSET: + s->regs.entry[idx].addr_reg = value32; + break; + case ENTRY_ADDRH_OFFSET: + s->regs.entry[idx].addrh_reg = value32; + break; + case ENTRY_CFG_OFFSET: + s->regs.entry[idx].cfg_reg = value32; + if (!s->tor_en && + FIELD_EX32(s->regs.entry[idx].cfg_reg, + ENTRY_CFG, A) == IOPMP_AMATCH_TOR) { + s->regs.entry[idx].cfg_reg = + FIELD_DP32(s->regs.entry[idx].cfg_reg, ENTRY_CFG, A, + IOPMP_AMATCH_OFF); + } + if (!s->peis) { + s->regs.entry[idx].cfg_reg = + FIELD_DP32(s->regs.entry[idx].cfg_reg, ENTRY_CFG, + SIE, 0); + } + if (!s->pees) { + s->regs.entry[idx].cfg_reg = + FIELD_DP32(s->regs.entry[idx].cfg_reg, ENTRY_CFG, + SEE, 0); + } + break; + case ENTRY_USER_CFG_OFFSET: + /* Does not support user customized permission */ + break; + default: + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", + __func__, (int)addr); + break; + } + iopmp_update_rule(s, idx); + if (idx + 1 < s->entry_num && + FIELD_EX32(s->regs.entry[idx + 1].cfg_reg, ENTRY_CFG, A) == + IOPMP_AMATCH_TOR) { + iopmp_update_rule(s, idx + 1); + } + } + } else { + qemu_log_mask(LOG_GUEST_ERROR, "%s: Bad addr %x\n", __func__, + (int)addr); + } + } +} + +static void apply_sps_permission(RISCVIOPMPState *s, int rrid, int md, int *cfg) +{ + uint64_t srcmd_r, srcmd_w; + srcmd_r = ((uint64_t)s->regs.srcmd_rh[rrid]) << 32 | s->regs.srcmd_r[rrid]; + srcmd_w = ((uint64_t)s->regs.srcmd_wh[rrid]) << 32 | s->regs.srcmd_w[rrid]; + if (((srcmd_r >> (md + 1)) & 0x1) == 0) { + /* remove r&x permission and error suppression */ + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, R, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, X, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SIRE, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SERE, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SIXE, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SEXE, 0); + } + if (((srcmd_w >> (md + 1)) & 0x1) == 0) { + /* remove w permission and error suppression */ + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, W, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SIWE, 0); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, SEWE, 0); + } +} + +static void apply_srcmdperm(RISCVIOPMPState *s, int rrid, int md, int *cfg) +{ + uint64_t srcmd_perm = ((uint64_t)s->regs.srcmd_permh[md]) << 32 | + s->regs.srcmd_perm[md]; + + if (((srcmd_perm >> (2 * rrid)) & 0x1)) { + /* add r&x permission */ + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, R, 1); + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, X, 1); + } + if (((srcmd_perm >> (2 * rrid + 1)) & 0x1)) { + /* add w permission */ + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, W, 1); + } +} + +static inline void apply_no_chk_x(int *cfg) +{ + /* Use read permission for fetch */ + *cfg = FIELD_DP32(*cfg, ENTRY_CFG, X, 
FIELD_EX32(*cfg, ENTRY_CFG, R)); +} + +/* + * entry_range_list: The entry ranges from SRCMD and MDCFG to match + * entry_idx: matched priority entry index or first non-priority entry index + * cfg: entry cfg for matched priority entry and overlap permission and + * supression of matched on-priority entries + * iopmp_tlb_size: If entire tlb has the same permission, the value is + * TARGET_PAGE_SIZE, otherwise is 1. + */ +static iopmp_entry_hit match_entry_range(RISCVIOPMPState *s, int rrid, + GSList *entry_range_list, + hwaddr sa, hwaddr ea, + int *entry_idx, int *cfg, + hwaddr *iopmp_tlb_size) +{ + entry_range *range; + iopmp_entry_hit result = IOPMP_ENTRY_NO_HIT; + *iopmp_tlb_size = TARGET_PAGE_SIZE; + *cfg = 0; + int i = 0; + int s_idx, e_idx; + hwaddr tlb_sa = sa & ~(TARGET_PAGE_SIZE - 1); + hwaddr tlb_ea = (ea & ~(TARGET_PAGE_SIZE - 1)) + TARGET_PAGE_SIZE - 1; + int tlb_cfg = 0; + int md; + int curr_cfg; + + while (entry_range_list) { + range = (entry_range *)entry_range_list->data; + s_idx = range->start_idx; + e_idx = range->end_idx; + md = range->md; + if (e_idx > s->entry_num) { + e_idx = s->entry_num; + } + for (i = s_idx; i < e_idx; i++) { + if (FIELD_EX32(s->regs.entry[i].cfg_reg, ENTRY_CFG, A) == + IOPMP_AMATCH_OFF) { + continue; + } + + if (i < s->prio_entry) { + if (iopmp_tlb_size != NULL && + *iopmp_tlb_size == TARGET_PAGE_SIZE) { + if ((s->entry_addr[i].sa >= tlb_sa && + s->entry_addr[i].sa <= tlb_ea) || + (s->entry_addr[i].ea >= tlb_sa && + s->entry_addr[i].ea <= tlb_ea)) { + /* + * A higher priority entry in the same TLB page, + * but it does not occupy the entire page. + */ + *iopmp_tlb_size = 1; + } + } + if (sa >= s->entry_addr[i].sa && + sa <= s->entry_addr[i].ea) { + if (ea >= s->entry_addr[i].sa && + ea <= s->entry_addr[i].ea) { + *entry_idx = i; + *cfg = s->regs.entry[i].cfg_reg; + if (s->sps_en) { + apply_sps_permission(s, rrid, md, cfg); + } + if (s->srcmd_fmt == 2) { + apply_srcmdperm(s, rrid, md, cfg); + } + if (!s->chk_x) { + apply_no_chk_x(cfg); + } + return IOPMP_ENTRY_HIT; + } else { + *entry_idx = i; + return IOPMP_ENTRY_PAR_HIT; + } + } else if (ea >= s->entry_addr[i].sa && + ea <= s->entry_addr[i].ea) { + *entry_idx = i; + return IOPMP_ENTRY_PAR_HIT; + } else if (sa < s->entry_addr[i].sa && + ea > s->entry_addr[i].ea) { + *entry_idx = i; + return IOPMP_ENTRY_PAR_HIT; + } + } else { + /* Try to check entire tlb permission */ + if (*iopmp_tlb_size != 1 && + tlb_sa >= s->entry_addr[i].sa && + tlb_sa <= s->entry_addr[i].ea) { + if (tlb_ea >= s->entry_addr[i].sa && + tlb_ea <= s->entry_addr[i].ea) { + result = IOPMP_ENTRY_HIT; + curr_cfg = s->regs.entry[i].cfg_reg; + if (*entry_idx == -1) { + /* record first matched non-priorty entry */ + *entry_idx = i; + } + if (s->sps_en) { + apply_sps_permission(s, rrid, md, &curr_cfg); + } + if (s->srcmd_fmt == 2) { + apply_srcmdperm(s, rrid, md, &curr_cfg); + } + if (!s->chk_x) { + apply_no_chk_x(&curr_cfg); + } + tlb_cfg |= curr_cfg; + if ((tlb_cfg & 0x7) == 0x7) { + /* Already have RWX permission */ + *cfg = tlb_cfg; + return result; + } + } + } + if (sa >= s->entry_addr[i].sa && + sa <= s->entry_addr[i].ea) { + if (ea >= s->entry_addr[i].sa && + ea <= s->entry_addr[i].ea) { + result = IOPMP_ENTRY_HIT; + if (*entry_idx == -1) { + /* record first matched non-priorty entry */ + *entry_idx = i; + } + curr_cfg = s->regs.entry[i].cfg_reg; + if (s->sps_en) { + apply_sps_permission(s, rrid, md, &curr_cfg); + } + if (s->srcmd_fmt == 2) { + apply_srcmdperm(s, rrid, md, &curr_cfg); + } + if (!s->chk_x) { + 
apply_no_chk_x(&curr_cfg); + } + *cfg |= curr_cfg; + if ((*cfg & 0x7) == 0x7 && *iopmp_tlb_size == 1) { + /* + * Already have RWX permission and a higher priority + * entry in the same TLB page, checking the + * next non-priority entry is unnecessary + */ + return result; + } + } + } + } + } + entry_range_list = entry_range_list->next; + } + if (result == IOPMP_ENTRY_HIT && (*cfg & 0x7) != (tlb_cfg & 0x7)) { + /* + * For non-priority entry hit, if the tlb permssion is different to + * matched entries permssion, reduce iopmp_tlb_size + */ + *iopmp_tlb_size = 1; + } + return result; +} + +static void entry_range_list_data_free(gpointer data) +{ + entry_range *range = (entry_range *)data; + g_free(range); +} + +static iopmp_entry_hit match_entry_srcmd(RISCVIOPMPState *s, int rrid, + hwaddr start_addr, hwaddr end_addr, + int *match_entry_idx, int *cfg, + hwaddr *iopmp_tlb_size) +{ + iopmp_entry_hit result = IOPMP_ENTRY_NO_HIT; + GSList *entry_range_list = NULL; + uint64_t srcmd_en; + int k; + entry_range *range; + int md_idx; + if (s->srcmd_fmt == 1) { + range = g_malloc(sizeof(*range)); + md_idx = rrid; + range->md = md_idx; + if (s->mdcfg_fmt == 0) { + if (md_idx > 0) { + range->start_idx = FIELD_EX32(s->regs.mdcfg[md_idx - 1], + MDCFG0, T); + } else { + range->start_idx = 0; + } + range->end_idx = FIELD_EX32(s->regs.mdcfg[md_idx], MDCFG0, T); + } else { + k = s->md_entry_num + 1; + range->start_idx = md_idx * k; + range->end_idx = (md_idx + 1) * k; + } + entry_range_list = g_slist_append(entry_range_list, range); + } else { + for (md_idx = 0; md_idx < s->md_num; md_idx++) { + srcmd_en = ((uint64_t)s->regs.srcmd_en[rrid] | + ((uint64_t)s->regs.srcmd_enh[rrid] << 32)) >> 1; + range = NULL; + if (s->srcmd_fmt == 2) { + /* All entries are needed to be checked in srcmd_fmt2 */ + srcmd_en = -1; + } + if (srcmd_en & (1ULL << md_idx)) { + range = g_malloc(sizeof(*range)); + range->md = md_idx; + if (s->mdcfg_fmt == 0) { + if (md_idx > 0) { + range->start_idx = FIELD_EX32(s->regs.mdcfg[md_idx - 1], + MDCFG0, T); + } else { + range->start_idx = 0; + } + range->end_idx = FIELD_EX32(s->regs.mdcfg[md_idx], + MDCFG0, T); + } else { + k = s->md_entry_num + 1; + range->start_idx = md_idx * k; + range->end_idx = (md_idx + 1) * k; + } + } + /* + * There is no more memory domain after it enconter an invalid + * mdcfg. Note that the behavior of mdcfg(t+1).f < mdcfg(t).f is + * implementation-dependent. 
+ */ + if (range != NULL) { + if (range->end_idx < range->start_idx) { + break; + } + entry_range_list = g_slist_append(entry_range_list, range); + } + } + } + result = match_entry_range(s, rrid, entry_range_list, start_addr, end_addr, + match_entry_idx, cfg, iopmp_tlb_size); + g_slist_free_full(entry_range_list, entry_range_list_data_free); + return result; +} + +static MemTxResult iopmp_error_reaction(RISCVIOPMPState *s, uint32_t rrid, + uint32_t eid, hwaddr addr, + uint32_t etype, uint32_t ttype, + uint32_t cfg, uint64_t *data) +{ + uint32_t error_id = 0; + uint32_t error_info = 0; + int offset; + /* interrupt enable regarding the access */ + int ie; + /* bus error enable */ + int be; + int error_record; + + if (etype >= ERR_INFO_ETYPE_READ && etype <= ERR_INFO_ETYPE_WRITE) { + offset = etype - ERR_INFO_ETYPE_READ; + ie = (FIELD_EX32(s->regs.err_cfg, ERR_CFG, IE) && + !extract32(cfg, R_ENTRY_CFG_SIRE_SHIFT + offset, 1)); + be = (!FIELD_EX32(s->regs.err_cfg, ERR_CFG, RS) && + !extract32(cfg, R_ENTRY_CFG_SERE_SHIFT + offset, 1)); + } else { + ie = extract32(s->regs.err_cfg, R_ERR_CFG_IE_SHIFT, 1); + be = !extract32(s->regs.err_cfg, R_ERR_CFG_RS_SHIFT, 1); + } + error_record = (ie | be) && !(s->transaction_state[rrid].running && + s->transaction_state[rrid].error_reported); + if (error_record) { + if (s->transaction_state[rrid].running) { + s->transaction_state[rrid].error_reported = true; + } + /* Update error information if the error is not suppressed */ + if (!FIELD_EX32(s->regs.err_info, ERR_INFO, V)) { + error_id = FIELD_DP32(error_id, ERR_REQID, EID, eid); + error_id = FIELD_DP32(error_id, ERR_REQID, RRID, rrid); + error_info = FIELD_DP32(error_info, ERR_INFO, ETYPE, etype); + error_info = FIELD_DP32(error_info, ERR_INFO, TTYPE, ttype); + s->regs.err_info = error_info; + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, V, 1); + s->regs.err_reqid = error_id; + /* addr[LEN+2:2] */ + s->regs.err_reqaddr = addr >> 2; + if (ie) { + if (FIELD_EX32(s->regs.err_cfg, ERR_CFG, MSI_EN)) { + iopmp_msi_send(s); + } else { + qemu_set_irq(s->irq, 1); + } + } + } else if (s->mfr_en) { + s->svw[rrid / 16] |= (1 << (rrid % 16)); + s->regs.err_info = FIELD_DP32(s->regs.err_info, ERR_INFO, SVC, 1); + } + } + if (be) { + return MEMTX_ERROR; + } else { + if (data) { + *data = s->err_rdata; + } + return MEMTX_OK; + } +} + +static IOMMUTLBEntry iopmp_translate(IOMMUMemoryRegion *iommu, hwaddr addr, + IOMMUAccessFlags flags, int iommu_idx) +{ + int rrid = iommu_idx; + RISCVIOPMPState *s = RISCV_IOPMP(container_of(iommu, + RISCVIOPMPState, iommu)); + hwaddr start_addr, end_addr; + int entry_idx = -1; + hwaddr iopmp_tlb_size = TARGET_PAGE_SIZE; + int match_cfg = 0; + iopmp_entry_hit result; + iopmp_permission iopmp_perm; + bool lock = false; + IOMMUTLBEntry entry = { + .target_as = &s->downstream_as, + .iova = addr, + .translated_addr = addr, + .addr_mask = 0, + .perm = IOMMU_NONE, + }; + + if (!s->enable) { + /* Bypass IOPMP */ + entry.addr_mask = TARGET_PAGE_SIZE - 1, + entry.perm = IOMMU_RW; + return entry; + } + + /* unknown RRID */ + if (rrid >= s->rrid_num) { + entry.target_as = &s->blocked_rwx_as; + entry.perm = IOMMU_RW; + return entry; + } + + if (s->is_stalled && s->rrid_stall[rrid]) { + if (FIELD_EX32(s->regs.err_cfg, ERR_CFG, STALL_VIOLATION_EN)) { + entry.target_as = &s->blocked_rwx_as; + entry.perm = IOMMU_RW; + return entry; + } else { + if (bql_locked()) { + bql_unlock(); + lock = true; + } + while (s->is_stalled && s->rrid_stall[rrid]) { + ; + } + if (lock) { + bql_lock(); + } + } + } 
+ + if (s->transaction_state[rrid].running == true) { + start_addr = s->transaction_state[rrid].start_addr; + end_addr = s->transaction_state[rrid].end_addr; + } else { + /* No transaction information, use the same address */ + start_addr = addr; + end_addr = addr; + } + result = match_entry_srcmd(s, rrid, start_addr, end_addr, &entry_idx, + &match_cfg, &iopmp_tlb_size); + entry.addr_mask = iopmp_tlb_size - 1; + /* Remove permission for no_x, no_w*/ + if (s->chk_x && s->no_x) { + match_cfg = FIELD_DP32(match_cfg, ENTRY_CFG, X, 0); + } + if (s->no_w) { + match_cfg = FIELD_DP32(match_cfg, ENTRY_CFG, W, 0); + } + if (result == IOPMP_ENTRY_HIT) { + iopmp_perm = match_cfg & IOPMP_RWX; + if (flags) { + if ((iopmp_perm & flags) == 0) { + /* Permission denied */ + entry.target_as = &s->blocked_rwx_as; + entry.perm = IOMMU_RW; + } else { + entry.target_as = &s->downstream_as; + if (s->rrid_transl_en) { + /* Indirectly access for rrid_transl */ + entry.target_as = &s->full_as; + } + entry.perm = iopmp_perm; + } + } else { + /* CPU access with IOMMU_NONE flag */ + if (iopmp_perm & IOPMP_XO) { + if ((iopmp_perm & IOPMP_RW) == IOPMP_RW) { + entry.target_as = &s->downstream_as; + if (s->rrid_transl_en) { + entry.target_as = &s->full_as; + } + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) { + entry.target_as = &s->blocked_w_as; + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) { + entry.target_as = &s->blocked_r_as; + } else { + entry.target_as = &s->blocked_rw_as; + } + } else { + if ((iopmp_perm & IOPMP_RW) == IOMMU_RW) { + entry.target_as = &s->blocked_x_as; + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_RO) { + entry.target_as = &s->blocked_wx_as; + } else if ((iopmp_perm & IOPMP_RW) == IOPMP_WO) { + entry.target_as = &s->blocked_rx_as; + } else { + entry.target_as = &s->blocked_rwx_as; + } + } + entry.perm = IOMMU_RW; + } + } else { + /* CPU access with IOMMU_NONE flag no_hit or par_hit */ + entry.target_as = &s->blocked_rwx_as; + entry.perm = IOMMU_RW; + } + return entry; +} + +static const MemoryRegionOps iopmp_ops = { + .read = iopmp_read, + .write = iopmp_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 4, .max_access_size = 4} +}; + +static MemTxResult iopmp_permssion_write(void *opaque, hwaddr addr, + uint64_t value, unsigned size, + MemTxAttrs attrs) +{ + MemTxResult result; + int rrid = attrs.requester_id; + bool sent_info = false; + riscv_iopmp_txn_info signal; + RISCVIOPMPState *s = RISCV_IOPMP(opaque); + if (s->rrid_transl_en) { + if (s->transaction_state[rrid].running && s->send_ss) { + sent_info = true; + signal.rrid = s->rrid_transl; + signal.start_addr = s->transaction_state[rrid].start_addr; + signal.end_addr = s->transaction_state[rrid].end_addr; + signal.stage = s->transaction_state[rrid].stage + 1; + /* Send transaction information to next stage iopmp. 
*/ + stream_push(s->send_ss, (uint8_t *)&signal, sizeof(signal), 0); + } + attrs.requester_id = s->rrid_transl; + } + result = address_space_write(&s->downstream_as, addr, attrs, &value, size); + if (sent_info) { + stream_push(s->send_ss, (uint8_t *)&signal, sizeof(signal), 1); + } + return result; +} + +static MemTxResult iopmp_permssion_read(void *opaque, hwaddr addr, + uint64_t *pdata, unsigned size, + MemTxAttrs attrs) +{ + MemTxResult result; + int rrid = attrs.requester_id; + bool sent_info = false; + riscv_iopmp_txn_info signal; + RISCVIOPMPState *s = RISCV_IOPMP(opaque); + if (s->rrid_transl_en) { + if (s->transaction_state[rrid].running && s->send_ss) { + sent_info = true; + signal.rrid = s->rrid_transl; + signal.start_addr = s->transaction_state[rrid].start_addr; + signal.end_addr = s->transaction_state[rrid].end_addr; + signal.stage = s->transaction_state[rrid].stage + 1; + /* Send transaction information to next stage iopmp. */ + stream_push(s->send_ss, (uint8_t *)&signal, sizeof(signal), 0); + } + attrs.requester_id = s->rrid_transl; + } + result = address_space_read(&s->downstream_as, addr, attrs, pdata, size); + if (sent_info) { + stream_push(s->send_ss, (uint8_t *)&signal, sizeof(signal), 1); + } + return result; +} + +static MemTxResult iopmp_handle_block(void *opaque, hwaddr addr, + uint64_t *data, unsigned size, + MemTxAttrs attrs, + iopmp_access_type access_type) +{ + RISCVIOPMPState *s = RISCV_IOPMP(opaque); + int entry_idx; + int rrid = attrs.requester_id; + int result; + hwaddr start_addr, end_addr; + iopmp_err_info_etype etype; + iopmp_err_info_ttype ttype; + ttype = access_type; + hwaddr iopmp_tlb_size = TARGET_PAGE_SIZE; + int match_cfg = 0; + /* unknown RRID */ + if (rrid >= s->rrid_num) { + etype = ERR_INFO_ETYPE_RRID; + return iopmp_error_reaction(s, rrid, 0, addr, etype, ttype, 0, data); + } + + if (s->is_stalled && s->rrid_stall[rrid]) { + etype = ERR_INFO_ETYPE_STALL; + return iopmp_error_reaction(s, rrid, 0, addr, etype, ttype, 0, data); + } + + if ((access_type == IOPMP_ACCESS_FETCH && s->no_x) || + (access_type == IOPMP_ACCESS_WRITE && s->no_w)) { + etype = ERR_INFO_ETYPE_NOHIT; + return iopmp_error_reaction(s, rrid, 0, addr, etype, ttype, 0, data); + } + + if (s->transaction_state[rrid].running == true) { + start_addr = s->transaction_state[rrid].start_addr; + end_addr = s->transaction_state[rrid].end_addr; + } else { + /* No transaction information, use the same address */ + start_addr = addr; + end_addr = addr; + } + + /* matching again to get eid */ + result = match_entry_srcmd(s, rrid, start_addr, end_addr, + &entry_idx, &match_cfg, + &iopmp_tlb_size); + if (result == IOPMP_ENTRY_HIT) { + etype = access_type; + } else if (result == IOPMP_ENTRY_PAR_HIT) { + etype = ERR_INFO_ETYPE_PARHIT; + /* error supperssion per entry is only for all-byte matched entry */ + } else { + etype = ERR_INFO_ETYPE_NOHIT; + entry_idx = 0; + } + return iopmp_error_reaction(s, rrid, entry_idx, start_addr, etype, ttype, + match_cfg, data); +} + +static MemTxResult iopmp_block_write(void *opaque, hwaddr addr, uint64_t value, + unsigned size, MemTxAttrs attrs) +{ + return iopmp_handle_block(opaque, addr, NULL, size, attrs, + IOPMP_ACCESS_WRITE); +} + +static MemTxResult iopmp_block_read(void *opaque, hwaddr addr, uint64_t *pdata, + unsigned size, MemTxAttrs attrs) +{ + return iopmp_handle_block(opaque, addr, pdata, size, attrs, + IOPMP_ACCESS_READ); +} + +static MemTxResult iopmp_block_fetch(void *opaque, hwaddr addr, uint64_t *pdata, + unsigned size, MemTxAttrs attrs) +{ + 
RISCVIOPMPState *s = RISCV_IOPMP(opaque); + if (s->chk_x) { + return iopmp_handle_block(opaque, addr, pdata, size, attrs, + IOPMP_ACCESS_FETCH); + } + /* Using read reaction for no chk_x */ + return iopmp_handle_block(opaque, addr, pdata, size, attrs, + IOPMP_ACCESS_READ); +} + +static const MemoryRegionOps iopmp_block_rw_ops = { + .fetch_with_attrs = iopmp_permssion_read, + .read_with_attrs = iopmp_block_read, + .write_with_attrs = iopmp_block_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_w_ops = { + .fetch_with_attrs = iopmp_permssion_read, + .read_with_attrs = iopmp_permssion_read, + .write_with_attrs = iopmp_block_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_r_ops = { + .fetch_with_attrs = iopmp_permssion_read, + .read_with_attrs = iopmp_block_read, + .write_with_attrs = iopmp_permssion_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_rwx_ops = { + .fetch_with_attrs = iopmp_block_fetch, + .read_with_attrs = iopmp_block_read, + .write_with_attrs = iopmp_block_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_wx_ops = { + .fetch_with_attrs = iopmp_block_fetch, + .read_with_attrs = iopmp_permssion_read, + .write_with_attrs = iopmp_block_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_rx_ops = { + .fetch_with_attrs = iopmp_block_fetch, + .read_with_attrs = iopmp_block_read, + .write_with_attrs = iopmp_permssion_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_block_x_ops = { + .fetch_with_attrs = iopmp_block_fetch, + .read_with_attrs = iopmp_permssion_read, + .write_with_attrs = iopmp_permssion_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static const MemoryRegionOps iopmp_full_ops = { + .fetch_with_attrs = iopmp_permssion_read, + .read_with_attrs = iopmp_permssion_read, + .write_with_attrs = iopmp_permssion_write, + .endianness = DEVICE_NATIVE_ENDIAN, + .valid = {.min_access_size = 1, .max_access_size = 8}, +}; + +static void iopmp_realize(DeviceState *dev, Error **errp) +{ + Object *obj = OBJECT(dev); + SysBusDevice *sbd = SYS_BUS_DEVICE(dev); + RISCVIOPMPState *s = RISCV_IOPMP(dev); + uint64_t size; + + size = -1ULL; + + if (s->srcmd_fmt > 2) { + error_setg(errp, "Invalid IOPMP srcmd_fmt"); + error_append_hint(errp, "Valid values are 0, 1, and 2.\n"); + return; + } + + if (s->mdcfg_fmt > 2) { + error_setg(errp, "Invalid IOPMP mdcfg_fmt"); + error_append_hint(errp, "Valid values are 0, 1, and 2.\n"); + return; + } + + if (s->srcmd_fmt != 0) { + /* SPS is only supported in srcmd_fmt0 */ + s->sps_en = false; + } + + s->md_num = MIN(s->md_num, IOPMP_MAX_MD_NUM); + if (s->srcmd_fmt == 1) { + /* Each RRID has one MD */ + s->md_num = MIN(s->md_num, s->rrid_num); + } + s->md_entry_num = s->default_md_entry_num; + /* If md_entry_num is fixed, entry_num = md_num * (md_entry_num + 1)*/ + if (s->mdcfg_fmt == 1) { + s->entry_num = s->md_num * (s->md_entry_num + 1); + } + + s->prient_prog = s->default_prient_prog; + if (s->srcmd_fmt == 0) { + 
s->rrid_num = MIN(s->rrid_num, IOPMP_SRCMDFMT0_MAX_RRID_NUM); + } else if (s->srcmd_fmt == 1) { + s->rrid_num = MIN(s->rrid_num, s->md_num); + } else { + s->rrid_num = MIN(s->rrid_num, IOPMP_SRCMDFMT2_MAX_RRID_NUM); + } + s->prio_entry = MIN(s->default_prio_entry, s->entry_num); + s->rrid_transl_prog = s->default_rrid_transl_prog; + s->rrid_transl = s->default_rrid_transl; + + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSI_EN, + s->default_msi_en); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSIDATA, + s->default_msidata); + s->regs.err_msiaddr = s->default_err_msiaddr; + s->regs.err_msiaddrh = s->default_err_msiaddrh; + + s->regs.mdcfg = g_malloc0(s->md_num * sizeof(uint32_t)); + if (s->srcmd_fmt != 2) { + s->regs.srcmd_en = g_malloc0(s->rrid_num * sizeof(uint32_t)); + s->regs.srcmd_enh = g_malloc0(s->rrid_num * sizeof(uint32_t)); + } else { + /* srcmd_perm */ + s->regs.srcmd_perm = g_malloc0(s->md_num * sizeof(uint32_t)); + s->regs.srcmd_permh = g_malloc0(s->md_num * sizeof(uint32_t)); + } + + if (s->sps_en) { + s->regs.srcmd_r = g_malloc0(s->rrid_num * sizeof(uint32_t)); + s->regs.srcmd_rh = g_malloc0(s->rrid_num * sizeof(uint32_t)); + s->regs.srcmd_w = g_malloc0(s->rrid_num * sizeof(uint32_t)); + s->regs.srcmd_wh = g_malloc0(s->rrid_num * sizeof(uint32_t)); + } + + if (s->stall_en) { + s->rrid_stall = g_malloc0(s->rrid_num * sizeof(bool)); + } + + if (s->mfr_en) { + s->svw = g_malloc0((s->rrid_num / 16 + 1) * sizeof(uint16_t)); + } + + s->regs.entry = g_malloc0(s->entry_num * sizeof(riscv_iopmp_entry_t)); + s->entry_addr = g_malloc0(s->entry_num * sizeof(riscv_iopmp_addr_t)); + s->transaction_state = g_malloc0(s->rrid_num * + sizeof(riscv_iopmp_transaction_state)); + qemu_mutex_init(&s->iopmp_transaction_mutex); + + memory_region_init_iommu(&s->iommu, sizeof(s->iommu), + TYPE_RISCV_IOPMP_IOMMU_MEMORY_REGION, + obj, "riscv-iopmp-sysbus-iommu", UINT64_MAX); + memory_region_init_io(&s->mmio, obj, &iopmp_ops, + s, "riscv-iopmp-regs", 0x100000); + sysbus_init_mmio(sbd, &s->mmio); + + memory_region_init_io(&s->blocked_rw, NULL, &iopmp_block_rw_ops, s, + "riscv-iopmp-blocked-rw", size); + memory_region_init_io(&s->blocked_w, NULL, &iopmp_block_w_ops, s, + "riscv-iopmp-blocked-w", size); + memory_region_init_io(&s->blocked_r, NULL, &iopmp_block_r_ops, s, + "riscv-iopmp-blocked-r", size); + memory_region_init_io(&s->blocked_rwx, NULL, &iopmp_block_rwx_ops, s, + "riscv-iopmp-blocked-rwx", size); + memory_region_init_io(&s->blocked_wx, NULL, &iopmp_block_wx_ops, s, + "riscv-iopmp-blocked-wx", size); + memory_region_init_io(&s->blocked_rx, NULL, &iopmp_block_rx_ops, s, + "riscv-iopmp-blocked-rx", size); + memory_region_init_io(&s->blocked_x, NULL, &iopmp_block_x_ops, s, + "riscv-iopmp-blocked-x", size); + memory_region_init_io(&s->full_mr, NULL, &iopmp_full_ops, s, + "riscv-iopmp-full", size); + + address_space_init(&s->blocked_rw_as, &s->blocked_rw, + "riscv-iopmp-blocked-rw-as"); + address_space_init(&s->blocked_w_as, &s->blocked_w, + "riscv-iopmp-blocked-w-as"); + address_space_init(&s->blocked_r_as, &s->blocked_r, + "riscv-iopmp-blocked-r-as"); + address_space_init(&s->blocked_rwx_as, &s->blocked_rwx, + "riscv-iopmp-blocked-rwx-as"); + address_space_init(&s->blocked_wx_as, &s->blocked_wx, + "riscv-iopmp-blocked-wx-as"); + address_space_init(&s->blocked_rx_as, &s->blocked_rx, + "riscv-iopmp-blocked-rx-as"); + address_space_init(&s->blocked_x_as, &s->blocked_x, + "riscv-iopmp-blocked-x-as"); + address_space_init(&s->full_as, &s->full_mr, "riscv-iopmp-full-as"); + + 
object_initialize_child(OBJECT(s), "riscv_iopmp_streamsink", + &s->txn_info_sink, + TYPE_RISCV_IOPMP_STREAMSINK); +} + +static void iopmp_reset_enter(Object *obj, ResetType type) +{ + RISCVIOPMPState *s = RISCV_IOPMP(obj); + + qemu_set_irq(s->irq, 0); + if (s->srcmd_fmt != 2) { + memset(s->regs.srcmd_en, 0, s->rrid_num * sizeof(uint32_t)); + memset(s->regs.srcmd_enh, 0, s->rrid_num * sizeof(uint32_t)); + } else { + memset(s->regs.srcmd_en, 0, s->md_num * sizeof(uint32_t)); + memset(s->regs.srcmd_enh, 0, s->md_num * sizeof(uint32_t)); + } + + if (s->sps_en) { + memset(s->regs.srcmd_r, 0, s->rrid_num * sizeof(uint32_t)); + memset(s->regs.srcmd_rh, 0, s->rrid_num * sizeof(uint32_t)); + memset(s->regs.srcmd_w, 0, s->rrid_num * sizeof(uint32_t)); + memset(s->regs.srcmd_wh, 0, s->rrid_num * sizeof(uint32_t)); + } + + if (s->stall_en) { + memset((void *)s->rrid_stall, 0, s->rrid_num * sizeof(bool)); + s->is_stalled = 0; + } + + if (s->mfr_en) { + memset(s->svw, 0, (s->rrid_num / 16 + 1) * sizeof(uint16_t)); + } + + memset(s->regs.entry, 0, s->entry_num * sizeof(riscv_iopmp_entry_t)); + memset(s->entry_addr, 0, s->entry_num * sizeof(riscv_iopmp_addr_t)); + memset(s->transaction_state, 0, + s->rrid_num * sizeof(riscv_iopmp_transaction_state)); + + s->regs.mdlck = 0; + s->regs.mdlckh = 0; + s->regs.entrylck = 0; + s->regs.mdcfglck = 0; + s->regs.mdstall = 0; + s->regs.mdstallh = 0; + s->regs.rridscp = 0; + s->regs.err_cfg = 0; + s->regs.err_reqaddr = 0; + s->regs.err_reqid = 0; + s->regs.err_info = 0; + + s->prient_prog = s->default_prient_prog; + s->rrid_transl_prog = s->default_rrid_transl_prog; + s->md_entry_num = s->default_md_entry_num; + s->rrid_transl = s->default_rrid_transl; + s->prio_entry = MIN(s->default_prio_entry, s->entry_num); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSI_EN, + s->default_msi_en); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, STALL_VIOLATION_EN, + s->default_stall_violation_en); + s->regs.err_cfg = FIELD_DP32(s->regs.err_cfg, ERR_CFG, MSIDATA, + s->default_msidata); + s->regs.err_msiaddr = s->default_err_msiaddr; + s->regs.err_msiaddrh = s->default_err_msiaddrh; + s->enable = 0; +} + +static void iopmp_reset_hold(Object *obj, ResetType type) +{ + RISCVIOPMPState *s = RISCV_IOPMP(obj); + + qemu_set_irq(s->irq, 0); +} + +static int iopmp_attrs_to_index(IOMMUMemoryRegion *iommu, MemTxAttrs attrs) +{ + return attrs.requester_id; +} + +static void iopmp_iommu_memory_region_class_init(ObjectClass *klass, void *data) +{ + IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass); + + imrc->translate = iopmp_translate; + imrc->attrs_to_index = iopmp_attrs_to_index; +} + +static const Property iopmp_property[] = { + DEFINE_PROP_UINT32("mdcfg_fmt", RISCVIOPMPState, mdcfg_fmt, 1), + DEFINE_PROP_UINT32("srcmd_fmt", RISCVIOPMPState, srcmd_fmt, 0), + DEFINE_PROP_BOOL("tor_en", RISCVIOPMPState, tor_en, true), + DEFINE_PROP_BOOL("sps_en", RISCVIOPMPState, sps_en, false), + DEFINE_PROP_BOOL("prient_prog", RISCVIOPMPState, default_prient_prog, true), + DEFINE_PROP_BOOL("rrid_transl_en", RISCVIOPMPState, rrid_transl_en, false), + DEFINE_PROP_BOOL("rrid_transl_prog", RISCVIOPMPState, + default_rrid_transl_prog, false), + DEFINE_PROP_BOOL("chk_x", RISCVIOPMPState, chk_x, true), + DEFINE_PROP_BOOL("no_x", RISCVIOPMPState, no_x, false), + DEFINE_PROP_BOOL("no_w", RISCVIOPMPState, no_w, false), + DEFINE_PROP_BOOL("stall_en", RISCVIOPMPState, stall_en, false), + DEFINE_PROP_BOOL("peis", RISCVIOPMPState, peis, true), + DEFINE_PROP_BOOL("pees", RISCVIOPMPState, 
pees, true), + DEFINE_PROP_BOOL("mfr_en", RISCVIOPMPState, mfr_en, true), + DEFINE_PROP_UINT32("md_entry_num", RISCVIOPMPState, default_md_entry_num, + 5), + DEFINE_PROP_UINT32("md_num", RISCVIOPMPState, md_num, 8), + DEFINE_PROP_UINT32("rrid_num", RISCVIOPMPState, rrid_num, 16), + DEFINE_PROP_UINT32("entry_num", RISCVIOPMPState, entry_num, 48), + DEFINE_PROP_UINT32("prio_entry", RISCVIOPMPState, default_prio_entry, + 65535), + DEFINE_PROP_UINT32("rrid_transl", RISCVIOPMPState, default_rrid_transl, + 0x0), + DEFINE_PROP_INT32("entry_offset", RISCVIOPMPState, entry_offset, 0x4000), + DEFINE_PROP_UINT32("err_rdata", RISCVIOPMPState, err_rdata, 0x0), + DEFINE_PROP_BOOL("msi_en", RISCVIOPMPState, default_msi_en, false), + DEFINE_PROP_UINT32("msidata", RISCVIOPMPState, default_msidata, 12), + DEFINE_PROP_BOOL("stall_violation_en", RISCVIOPMPState, + default_stall_violation_en, true), + DEFINE_PROP_UINT32("err_msiaddr", RISCVIOPMPState, default_err_msiaddr, + 0x24000000), + DEFINE_PROP_UINT32("err_msiaddrh", RISCVIOPMPState, default_err_msiaddrh, + 0x0), + DEFINE_PROP_UINT32("msi_rrid", RISCVIOPMPState, msi_rrid, 0), +}; + +static void iopmp_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc = DEVICE_CLASS(klass); + ResettableClass *rc = RESETTABLE_CLASS(klass); + device_class_set_props(dc, iopmp_property); + dc->realize = iopmp_realize; + rc->phases.enter = iopmp_reset_enter; + rc->phases.hold = iopmp_reset_hold; +} + +static void iopmp_init(Object *obj) +{ + RISCVIOPMPState *s = RISCV_IOPMP(obj); + SysBusDevice *sbd = SYS_BUS_DEVICE(obj); + + sysbus_init_irq(sbd, &s->irq); +} + +static const TypeInfo iopmp_info = { + .name = TYPE_RISCV_IOPMP, + .parent = TYPE_SYS_BUS_DEVICE, + .instance_size = sizeof(RISCVIOPMPState), + .instance_init = iopmp_init, + .class_init = iopmp_class_init, +}; + +static const TypeInfo iopmp_iommu_memory_region_info = { + .name = TYPE_RISCV_IOPMP_IOMMU_MEMORY_REGION, + .parent = TYPE_IOMMU_MEMORY_REGION, + .class_init = iopmp_iommu_memory_region_class_init, +}; + +DeviceState *iopmp_create(hwaddr addr, qemu_irq irq) +{ + DeviceState *dev = qdev_new(TYPE_RISCV_IOPMP); + sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, irq); + sysbus_realize_and_unref(SYS_BUS_DEVICE(dev), &error_fatal); + sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr); + return dev; +} + +/* + * Alias subregions from the source memory region to the destination memory + * region + */ +static void alias_memory_subregions(MemoryRegion *src_mr, MemoryRegion *dst_mr) +{ + int32_t priority; + hwaddr addr; + MemoryRegion *alias, *subregion; + QTAILQ_FOREACH(subregion, &src_mr->subregions, subregions_link) { + priority = subregion->priority; + addr = subregion->addr; + alias = g_malloc0(sizeof(MemoryRegion)); + memory_region_init_alias(alias, NULL, subregion->name, subregion, 0, + memory_region_size(subregion)); + memory_region_add_subregion_overlap(dst_mr, addr, alias, priority); + } +} + +/* + * Create downstream of system memory for IOPMP, and overlap memory region + * specified in memmap with IOPMP translator. Make sure subregions are added to + * system memory before call this function. It also add entry to + * iopmp_protection_memmaps for recording the relationship between physical + * address regions and IOPMP. 
+ */ +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap, + uint32_t map_entry_num, uint32_t stage) +{ + RISCVIOPMPState *s = RISCV_IOPMP(dev); + uint32_t i; + MemoryRegion *iommu_alias; + MemoryRegion *target_mr = get_system_memory(); + MemoryRegion *downstream = g_malloc0(sizeof(MemoryRegion)); + memory_region_init(downstream, NULL, "iopmp_downstream", + memory_region_size(target_mr)); + /* Create a downstream which does not have iommu of iopmp */ + alias_memory_subregions(target_mr, downstream); + + for (i = 0; i < map_entry_num; i++) { + /* Memory access to protected regions of target are through IOPMP */ + iommu_alias = g_new(MemoryRegion, 1); + memory_region_init_alias(iommu_alias, NULL, "iommu_alias", + MEMORY_REGION(&s->iommu), memmap[i].base, + memmap[i].size); + memory_region_add_subregion_overlap(target_mr, memmap[i].base, + iommu_alias, 1); + } + s->downstream = downstream; + address_space_init(&s->downstream_as, s->downstream, + "riscv-iopmp-downstream-as"); +} + +static size_t txn_info_push(StreamSink *txn_info_sink, unsigned char *buf, + size_t len, bool eop) +{ + riscv_iopmp_streamsink *ss = RISCV_IOPMP_STREAMSINK(txn_info_sink); + RISCVIOPMPState *s = RISCV_IOPMP(container_of(ss, RISCVIOPMPState, + txn_info_sink)); + riscv_iopmp_txn_info signal; + uint32_t rrid; + + memcpy(&signal, buf, len); + rrid = signal.rrid; + + if (s->transaction_state[rrid].running) { + if (eop) { + /* Finish the transaction */ + qemu_mutex_lock(&s->iopmp_transaction_mutex); + s->transaction_state[rrid].running = 0; + qemu_mutex_unlock(&s->iopmp_transaction_mutex); + return 1; + } else { + /* Transaction is already running */ + return 0; + } + } else if (len == sizeof(riscv_iopmp_txn_info)) { + /* Get the transaction info */ + s->transaction_state[rrid].supported = 1; + qemu_mutex_lock(&s->iopmp_transaction_mutex); + s->transaction_state[rrid].running = 1; + qemu_mutex_unlock(&s->iopmp_transaction_mutex); + + s->transaction_state[rrid].start_addr = signal.start_addr; + s->transaction_state[rrid].end_addr = signal.end_addr; + s->transaction_state[rrid].error_reported = 0; + s->transaction_state[rrid].stage = signal.stage; + return 1; + } + return 0; +} + +void iopmp_setup_sink(DeviceState *dev, StreamSink * ss) +{ + RISCVIOPMPState *s = RISCV_IOPMP(dev); + s->send_ss = ss; +} + +static void riscv_iopmp_streamsink_class_init(ObjectClass *klass, void *data) +{ + StreamSinkClass *ssc = STREAM_SINK_CLASS(klass); + ssc->push = txn_info_push; +} + +static const TypeInfo txn_info_sink = { + .name = TYPE_RISCV_IOPMP_STREAMSINK, + .parent = TYPE_OBJECT, + .instance_size = sizeof(riscv_iopmp_streamsink), + .class_init = riscv_iopmp_streamsink_class_init, + .interfaces = (InterfaceInfo[]) { + { TYPE_STREAM_SINK }, + { } + }, +}; + +static void iopmp_register_types(void) +{ + type_register_static(&iopmp_info); + type_register_static(&txn_info_sink); + type_register_static(&iopmp_iommu_memory_region_info); +} + +type_init(iopmp_register_types); diff --git a/hw/misc/trace-events b/hw/misc/trace-events index 0f5d2b5666..965bf7ffc6 100644 --- a/hw/misc/trace-events +++ b/hw/misc/trace-events @@ -384,3 +384,7 @@ ivshmem_flat_read_write_mmr_invalid(uint64_t addr_offset) "No ivshmem register m ivshmem_flat_interrupt_invalid_peer(uint16_t peer_id) "Can't interrupt non-existing peer %u" ivshmem_flat_write_mmr(uint64_t addr_offset) "Write access at offset %"PRIu64 ivshmem_flat_interrupt_peer(uint16_t peer_id, uint16_t vector_id) "Interrupting peer ID %u, vector %u..." 
+ +# riscv_iopmp.c +iopmp_read(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x" +iopmp_write(uint64_t addr, uint32_t val) "addr 0x%"PRIx64" val 0x%x" diff --git a/include/hw/misc/riscv_iopmp.h b/include/hw/misc/riscv_iopmp.h new file mode 100644 index 0000000000..18e3afa252 --- /dev/null +++ b/include/hw/misc/riscv_iopmp.h @@ -0,0 +1,191 @@ +/* + * QEMU RISC-V IOPMP (Input Output Physical Memory Protection) + * + * Copyright (c) 2023-2025 Andes Tech. Corp. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2 or later, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see . + */ + +#ifndef RISCV_IOPMP_H +#define RISCV_IOPMP_H + +#include "hw/sysbus.h" +#include "qemu/typedefs.h" +#include "memory.h" +#include "exec/hwaddr.h" +#include "hw/stream.h" + +#define TYPE_RISCV_IOPMP "riscv-iopmp" +OBJECT_DECLARE_SIMPLE_TYPE(RISCVIOPMPState, RISCV_IOPMP) + +typedef struct riscv_iopmp_streamsink { + Object parent; +} riscv_iopmp_streamsink; +#define TYPE_RISCV_IOPMP_STREAMSINK \ + "riscv-iopmp-streamsink" +DECLARE_INSTANCE_CHECKER(riscv_iopmp_streamsink, RISCV_IOPMP_STREAMSINK, + TYPE_RISCV_IOPMP_STREAMSINK) +typedef struct { + uint32_t addr_reg; + uint32_t addrh_reg; + uint32_t cfg_reg; +} riscv_iopmp_entry_t; + +typedef struct { + uint64_t sa; + uint64_t ea; +} riscv_iopmp_addr_t; + +typedef struct { + union { + uint32_t *srcmd_en; + uint32_t *srcmd_perm; + }; + union { + uint32_t *srcmd_enh; + uint32_t *srcmd_permh; + }; + uint32_t *srcmd_r; + uint32_t *srcmd_rh; + uint32_t *srcmd_w; + uint32_t *srcmd_wh; + uint32_t *mdcfg; + riscv_iopmp_entry_t *entry; + uint32_t mdlck; + uint32_t mdlckh; + uint32_t entrylck; + uint32_t mdcfglck; + uint32_t mdstall; + uint32_t mdstallh; + uint32_t rridscp; + uint32_t err_cfg; + uint64_t err_reqaddr; + uint32_t err_reqid; + uint32_t err_info; + uint32_t err_msiaddr; + uint32_t err_msiaddrh; +} riscv_iopmp_regs; + +/* To detect partially hit */ +typedef struct riscv_iopmp_transaction_state { + bool running; + bool error_reported; + bool supported; + uint32_t stage; + hwaddr start_addr; + hwaddr end_addr; +} riscv_iopmp_transaction_state; + +typedef struct RISCVIOPMPState { + SysBusDevice parent_obj; + riscv_iopmp_addr_t *entry_addr; + MemoryRegion mmio; + IOMMUMemoryRegion iommu; + riscv_iopmp_regs regs; + MemoryRegion *downstream; + MemoryRegion blocked_r, blocked_w, blocked_x, blocked_rw, blocked_rx, + blocked_wx, blocked_rwx; + MemoryRegion full_mr; + + AddressSpace downstream_as; + AddressSpace blocked_r_as, blocked_w_as, blocked_x_as, blocked_rw_as, + blocked_rx_as, blocked_wx_as, blocked_rwx_as; + AddressSpace full_as; + qemu_irq irq; + + /* Transaction(txn) information to identify whole transaction length */ + /* Receive txn info */ + riscv_iopmp_streamsink txn_info_sink; + /* Send txn info for next stage iopmp */ + StreamSink *send_ss; + riscv_iopmp_transaction_state *transaction_state; + QemuMutex iopmp_transaction_mutex; + + /* + * Stall: + * a while loop to check stall flags if stall_violation is not enabled + */ + volatile bool 
is_stalled; + volatile bool *rrid_stall; + + /* MFR extenstion */ + uint16_t *svw; + uint16_t svi; + + /* Properties */ + /* + * MDCFG Format 0: MDCFG table is implemented + * 1: HWCFG.md_entry_num is fixed + * 2: HWCFG.md_entry_num is programmable + */ + uint32_t mdcfg_fmt; + /* + * SRCMD Format 0: SRCMD_EN is implemented + * 1: 1 to 1 SRCMD mapping + * 2: SRCMD_PERM is implemented + */ + uint32_t srcmd_fmt; + bool tor_en; + /* SPS is only supported srcmd_fmt0 */ + bool sps_en; + /* Indicate prio_entry is programmable or not */ + bool default_prient_prog; + bool rrid_transl_en; + bool default_rrid_transl_prog; + bool chk_x; + bool no_x; + bool no_w; + bool stall_en; + bool default_stall_violation_en; + bool peis; + bool pees; + bool mfr_en; + /* Indicate md_entry_num for mdcfg_fmt1/2 */ + uint32_t default_md_entry_num; + uint32_t md_num; + uint32_t rrid_num; + uint32_t entry_num; + /* Indicate number of priority entry */ + uint32_t default_prio_entry; + uint32_t default_rrid_transl; + /* MSI */ + bool default_msi_en; + uint32_t default_msidata; + uint32_t default_err_msiaddr; + uint32_t default_err_msiaddrh; + uint32_t msi_rrid; + /* Note: entry_offset < 0 is not support in QEMU */ + int32_t entry_offset; + /* + * Data value to be returned for all read accesses that violate the security + * check + */ + uint32_t err_rdata; + + /* Current status for programmable parameters */ + bool prient_prog; + bool rrid_transl_prog; + uint32_t md_entry_num; + uint32_t prio_entry; + uint32_t rrid_transl; + bool enable; +} RISCVIOPMPState; + +DeviceState *iopmp_create(hwaddr addr, qemu_irq irq); +void iopmp_setup_system_memory(DeviceState *dev, const MemMapEntry *memmap, + uint32_t mapentry_num, uint32_t stage); +void iopmp_setup_sink(DeviceState *dev, StreamSink * ss); + +#endif From patchwork Thu Jan 9 02:44:40 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931900 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5A8CDE7719A for ; Thu, 9 Jan 2025 03:14:32 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tVizD-0000v3-68; Wed, 08 Jan 2025 22:13:31 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyy-0000jR-5c; Wed, 08 Jan 2025 22:13:16 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyv-0001NX-N3; Wed, 08 Jan 2025 22:13:15 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jpqw029104; Thu, 9 Jan 2025 10:45:51 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com with ESMTPS id 5092j2Y8027864 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Thu, 9 Jan 2025 10:45:02 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com 
(10.0.1.89) with Microsoft SMTP Server (TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:45:02 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 7/8] hw/misc/riscv_iopmp_dispatcher: Device for redirect IOPMP transaction infomation Date: Thu, 9 Jan 2025 10:44:40 +0800 Message-ID: <20250109024441.3283671-8-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jpqw029104 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org This device determines the target IOPMP device for forwarding information based on: * Address: For parallel IOPMP devices * Stage: For cascading IOPMP devices Signed-off-by: Ethan Chen --- hw/misc/meson.build | 1 + hw/misc/riscv_iopmp_dispatcher.c | 136 +++++++++++++++++++++++ include/hw/misc/riscv_iopmp_dispatcher.h | 61 ++++++++++ 3 files changed, 198 insertions(+) create mode 100644 hw/misc/riscv_iopmp_dispatcher.c create mode 100644 include/hw/misc/riscv_iopmp_dispatcher.h diff --git a/hw/misc/meson.build b/hw/misc/meson.build index 88f2bb6b88..497f83637f 100644 --- a/hw/misc/meson.build +++ b/hw/misc/meson.build @@ -35,6 +35,7 @@ system_ss.add(when: 'CONFIG_SIFIVE_E_AON', if_true: files('sifive_e_aon.c')) system_ss.add(when: 'CONFIG_SIFIVE_U_OTP', if_true: files('sifive_u_otp.c')) system_ss.add(when: 'CONFIG_SIFIVE_U_PRCI', if_true: files('sifive_u_prci.c')) specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp.c')) +specific_ss.add(when: 'CONFIG_RISCV_IOPMP', if_true: files('riscv_iopmp_dispatcher.c')) subdir('macio') diff --git a/hw/misc/riscv_iopmp_dispatcher.c b/hw/misc/riscv_iopmp_dispatcher.c new file mode 100644 index 0000000000..ba6eaeb396 --- /dev/null +++ b/hw/misc/riscv_iopmp_dispatcher.c @@ -0,0 +1,136 @@ +/* + * QEMU RISC-V IOPMP dispatcher + * + * Receives transaction information from the requestor and forwards it to the + * corresponding IOPMP device. + * + * Copyright (c) 2023-2025 Andes Tech. Corp. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2 or later, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. 
If not, see . + */ + +#include "qemu/osdep.h" +#include "qemu/log.h" +#include "qapi/error.h" +#include "trace.h" +#include "exec/exec-all.h" +#include "exec/address-spaces.h" +#include "hw/qdev-properties.h" +#include "hw/sysbus.h" +#include "hw/misc/riscv_iopmp_dispatcher.h" +#include "memory.h" +#include "hw/irq.h" +#include "hw/misc/riscv_iopmp_txn_info.h" + +static void riscv_iopmp_dispatcher_realize(DeviceState *dev, Error **errp) +{ + int i; + RISCVIOPMPDispState *s = RISCV_IOPMP_DISP(dev); + + s->SinkMemMap = g_new(SinkMemMapEntry *, s->stage_num); + for (i = 0; i < s->stage_num; i++) { + s->SinkMemMap[i] = g_new(SinkMemMapEntry, s->target_num); + } + + object_initialize_child(OBJECT(s), "iopmp_dispatcher_txn_info", + &s->txn_info_sink, + TYPE_RISCV_IOPMP_DISP_SS); +} + +static Property iopmp_dispatcher_properties[] = { + DEFINE_PROP_UINT32("stage-num", RISCVIOPMPDispState, stage_num, 2), + DEFINE_PROP_UINT32("target-num", RISCVIOPMPDispState, target_num, 10), +}; + +static void riscv_iopmp_dispatcher_class_init(ObjectClass *klass, void *data) +{ + DeviceClass *dc = DEVICE_CLASS(klass); + device_class_set_props(dc, iopmp_dispatcher_properties); + dc->realize = riscv_iopmp_dispatcher_realize; +} + +static const TypeInfo riscv_iopmp_dispatcher_info = { + .name = TYPE_RISCV_IOPMP_DISP, + .parent = TYPE_DEVICE, + .instance_size = sizeof(RISCVIOPMPDispState), + .class_init = riscv_iopmp_dispatcher_class_init, +}; + +static size_t dispatcher_txn_info_push(StreamSink *txn_info_sink, + unsigned char *buf, + size_t len, bool eop) +{ + uint64_t addr; + uint32_t stage; + int i, j; + riscv_iopmp_disp_ss *ss = + RISCV_IOPMP_DISP_SS(txn_info_sink); + RISCVIOPMPDispState *s = RISCV_IOPMP_DISP(container_of(ss, + RISCVIOPMPDispState, txn_info_sink)); + riscv_iopmp_txn_info signal; + memcpy(&signal, buf, len); + addr = signal.start_addr; + stage = signal.stage; + for (i = stage; i < s->stage_num; i++) { + for (j = 0; j < s->target_num; j++) { + if (s->SinkMemMap[i][j].map.base <= addr && + addr < s->SinkMemMap[i][j].map.base + + s->SinkMemMap[i][j].map.size) { + return stream_push(s->SinkMemMap[i][j].sink, buf, len, eop); + } + } + } + /* Always pass if target is not protected by IOPMP*/ + return 1; +} + +static void riscv_iopmp_disp_ss_class_init( + ObjectClass *klass, void *data) +{ + StreamSinkClass *ssc = STREAM_SINK_CLASS(klass); + ssc->push = dispatcher_txn_info_push; +} + +static const TypeInfo riscv_iopmp_disp_ss_info = { + .name = TYPE_RISCV_IOPMP_DISP_SS, + .parent = TYPE_OBJECT, + .instance_size = sizeof(riscv_iopmp_disp_ss), + .class_init = riscv_iopmp_disp_ss_class_init, + .interfaces = (InterfaceInfo[]) { + { TYPE_STREAM_SINK }, + { } + }, +}; + +void iopmp_dispatcher_add_target(DeviceState *dev, StreamSink *sink, + uint64_t base, uint64_t size, uint32_t stage, uint32_t id) +{ + RISCVIOPMPDispState *s = RISCV_IOPMP_DISP(dev); + if (stage < s->stage_num && id < s->target_num) { + s->SinkMemMap[stage][id].map.base = base; + s->SinkMemMap[stage][id].map.size = size; + s->SinkMemMap[stage][id].sink = sink; + } +} + +static void +iopmp_dispatcher_register_types(void) +{ + type_register_static(&riscv_iopmp_dispatcher_info); + type_register_static(&riscv_iopmp_disp_ss_info); +} + +type_init(iopmp_dispatcher_register_types); + diff --git a/include/hw/misc/riscv_iopmp_dispatcher.h b/include/hw/misc/riscv_iopmp_dispatcher.h new file mode 100644 index 0000000000..bbaa76507b --- /dev/null +++ b/include/hw/misc/riscv_iopmp_dispatcher.h @@ -0,0 +1,61 @@ +/* + * QEMU RISC-V IOPMP dispatcher + * + * 
Receives transaction information from the requestor and forwards it to the + * corresponding IOPMP device. + * + * Copyright (c) 2023-2024 Andes Tech. Corp. + * + * SPDX-License-Identifier: GPL-2.0-or-later + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2 or later, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program. If not, see . + */ + +#ifndef RISCV_IOPMP_DISPATCHER_H +#define RISCV_IOPMP_DISPATCHER_H + +#include "hw/sysbus.h" +#include "qemu/typedefs.h" +#include "memory.h" +#include "hw/stream.h" +#include "hw/misc/riscv_iopmp_txn_info.h" +#include "exec/hwaddr.h" + +#define TYPE_RISCV_IOPMP_DISP "riscv-iopmp-dispatcher" +OBJECT_DECLARE_SIMPLE_TYPE(RISCVIOPMPDispState, RISCV_IOPMP_DISP) + +#define TYPE_RISCV_IOPMP_DISP_SS "riscv-iopmp-dispatcher-streamsink" +OBJECT_DECLARE_SIMPLE_TYPE(riscv_iopmp_disp_ss, RISCV_IOPMP_DISP_SS) + +typedef struct riscv_iopmp_disp_ss { + Object parent; +} riscv_iopmp_disp_ss; + +typedef struct SinkMemMapEntry { + StreamSink *sink; + MemMapEntry map; +} SinkMemMapEntry; + +typedef struct RISCVIOPMPDispState { + SysBusDevice parent_obj; + riscv_iopmp_disp_ss txn_info_sink; + SinkMemMapEntry **SinkMemMap; + /* The maximum number of cascading stages of IOPMP */ + uint32_t stage_num; + /* The maximum number of parallel IOPMP devices within a single stage. */ + uint32_t target_num; +} RISCVIOPMPDispState; + +void iopmp_dispatcher_add_target(DeviceState *dev, StreamSink *sink, + uint64_t base, uint64_t size, uint32_t stage, uint32_t id); +#endif From patchwork Thu Jan 9 02:44:41 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ethan Chen X-Patchwork-Id: 13931895 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 4F139E77188 for ; Thu, 9 Jan 2025 03:13:22 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1tViyz-0000k1-P9; Wed, 08 Jan 2025 22:13:17 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyt-0000hm-2K; Wed, 08 Jan 2025 22:13:11 -0500 Received: from 60-248-80-70.hinet-ip.hinet.net ([60.248.80.70] helo=Atcsqr.andestech.com) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1tViyq-0001MK-Rz; Wed, 08 Jan 2025 22:13:10 -0500 Received: from Atcsqr.andestech.com (localhost [127.0.0.2] (may be forged)) by Atcsqr.andestech.com with ESMTP id 5092jniQ028932; Thu, 9 Jan 2025 10:45:49 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from mail.andestech.com (ATCPCS31.andestech.com [10.0.1.89]) by Atcsqr.andestech.com with ESMTPS id 5092j2Fd027904 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); 
Thu, 9 Jan 2025 10:45:02 +0800 (+08) (envelope-from ethan84@andestech.com) Received: from atcpcw16.andestech.com (10.0.1.106) by ATCPCS31.andestech.com (10.0.1.89) with Microsoft SMTP Server (TLS) id 14.3.498.0; Thu, 9 Jan 2025 10:45:02 +0800 To: CC: , , , , , , , , , , , , Ethan Chen Subject: [PATCH v9 8/8] hw/riscv/virt: Add IOPMP support Date: Thu, 9 Jan 2025 10:44:41 +0800 Message-ID: <20250109024441.3283671-9-ethan84@andestech.com> X-Mailer: git-send-email 2.42.0.345.gaab89be2eb.dirty In-Reply-To: <20250109024441.3283671-1-ethan84@andestech.com> References: <20250109024441.3283671-1-ethan84@andestech.com> MIME-Version: 1.0 X-Originating-IP: [10.0.1.106] X-DKIM-Results: atcpcs31.andestech.com; dkim=none; X-DNSRBL: X-MAIL: Atcsqr.andestech.com 5092jniQ028932 Received-SPF: pass client-ip=60.248.80.70; envelope-from=ethan84@andestech.com; helo=Atcsqr.andestech.com X-Spam_score_int: -8 X-Spam_score: -0.9 X-Spam_bar: / X-Spam_report: (-0.9 / 5.0 requ) BAYES_00=-1.9, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.001, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.001, RDNS_DYNAMIC=0.982, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, TVD_RCVD_IP=0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Reply-to: Ethan Chen X-Patchwork-Original-From: Ethan Chen via From: Ethan Chen Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org - Add 'iopmp=on' option to enable IOPMP. It adds IOPMP devices to the virt machine to protect all regions of system memory. Signed-off-by: Ethan Chen --- docs/system/riscv/virt.rst | 7 ++++ hw/riscv/Kconfig | 1 + hw/riscv/virt.c | 75 ++++++++++++++++++++++++++++++++++++++ include/hw/riscv/virt.h | 4 ++ 4 files changed, 87 insertions(+) diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst index 60850970ce..6b5fc1d37d 100644 --- a/docs/system/riscv/virt.rst +++ b/docs/system/riscv/virt.rst @@ -146,6 +146,13 @@ The following machine-specific options are supported: Enables the riscv-iommu-sys platform device. Defaults to 'off'. +- iopmp=[on|off] + + When this option is "on", IOPMP devices are added to the machine. IOPMP checks + memory transactions in system memory. This option defaults to "off". To + enable the CPU to perform transactions with a specified RRID, use the CPU + option "-cpu ,iopmp=true,iopmp_rrid=" + Running Linux kernel -------------------- diff --git a/hw/riscv/Kconfig b/hw/riscv/Kconfig index e6a0ac1fa1..637438af2c 100644 --- a/hw/riscv/Kconfig +++ b/hw/riscv/Kconfig @@ -68,6 +68,7 @@ config RISCV_VIRT select PLATFORM_BUS select ACPI select ACPI_PCI + select RISCV_IOPMP config SHAKTI_C bool diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c index 2bc5a9dd98..95f8946ae1 100644 --- a/hw/riscv/virt.c +++ b/hw/riscv/virt.c @@ -57,6 +57,8 @@ #include "hw/acpi/aml-build.h" #include "qapi/qapi-visit-common.h" #include "hw/virtio/virtio-iommu.h" +#include "hw/misc/riscv_iopmp.h" +#include "hw/misc/riscv_iopmp_dispatcher.h" /* KVM AIA only supports APLIC MSI. APLIC Wired is always emulated by QEMU.
*/ static bool virt_use_kvm_aia_aplic_imsic(RISCVVirtAIAType aia_type) @@ -94,6 +96,7 @@ static const MemMapEntry virt_memmap[] = { [VIRT_UART0] = { 0x10000000, 0x100 }, [VIRT_VIRTIO] = { 0x10001000, 0x1000 }, [VIRT_FW_CFG] = { 0x10100000, 0x18 }, + [VIRT_IOPMP] = { 0x10200000, 0x100000 }, [VIRT_FLASH] = { 0x20000000, 0x4000000 }, [VIRT_IMSIC_M] = { 0x24000000, VIRT_IMSIC_MAX_SIZE }, [VIRT_IMSIC_S] = { 0x28000000, VIRT_IMSIC_MAX_SIZE }, @@ -102,6 +105,11 @@ static const MemMapEntry virt_memmap[] = { [VIRT_DRAM] = { 0x80000000, 0x0 }, }; +static const MemMapEntry iopmp_protect_memmap[] = { + /* IOPMP protects all regions by default */ + {0x0, 0xFFFFFFFF}, +}; + /* PCIe high mmio is fixed for RV32 */ #define VIRT32_HIGH_PCIE_MMIO_BASE 0x300000000ULL #define VIRT32_HIGH_PCIE_MMIO_SIZE (4 * GiB) @@ -1117,6 +1125,24 @@ static void create_fdt_iommu(RISCVVirtState *s, uint16_t bdf) bdf + 1, iommu_phandle, bdf + 1, 0xffff - bdf); } +static void create_fdt_iopmp(RISCVVirtState *s, const MemMapEntry *memmap, + uint32_t irq_mmio_phandle) { + g_autofree char *name = NULL; + MachineState *ms = MACHINE(s); + + name = g_strdup_printf("/soc/iopmp@%lx", (long)memmap[VIRT_IOPMP].base); + qemu_fdt_add_subnode(ms->fdt, name); + qemu_fdt_setprop_string(ms->fdt, name, "compatible", "riscv_iopmp"); + qemu_fdt_setprop_cells(ms->fdt, name, "reg", 0x0, memmap[VIRT_IOPMP].base, + 0x0, memmap[VIRT_IOPMP].size); + qemu_fdt_setprop_cell(ms->fdt, name, "interrupt-parent", irq_mmio_phandle); + if (s->aia_type == VIRT_AIA_TYPE_NONE) { + qemu_fdt_setprop_cell(ms->fdt, name, "interrupts", IOPMP_IRQ); + } else { + qemu_fdt_setprop_cells(ms->fdt, name, "interrupts", IOPMP_IRQ, 0x4); + } +} + static void finalize_fdt(RISCVVirtState *s) { uint32_t phandle = 1, irq_mmio_phandle = 1, msi_pcie_phandle = 1; @@ -1141,6 +1167,10 @@ static void finalize_fdt(RISCVVirtState *s) create_fdt_uart(s, virt_memmap, irq_mmio_phandle); create_fdt_rtc(s, virt_memmap, irq_mmio_phandle); + + if (s->have_iopmp) { + create_fdt_iopmp(s, virt_memmap, irq_mmio_phandle); + } } static void create_fdt(RISCVVirtState *s, const MemMapEntry *memmap) @@ -1529,6 +1559,8 @@ static void virt_machine_init(MachineState *machine) DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip; int i, base_hartid, hart_count; int socket_count = riscv_socket_count(machine); + DeviceState *iopmp_dev, *iopmp_disp_dev; + StreamSink *iopmp_ss, *iopmp_disp_ss; /* Check socket count limit */ if (VIRT_SOCKETS_MAX < socket_count) { @@ -1710,6 +1742,29 @@ static void virt_machine_init(MachineState *machine) } virt_flash_map(s, system_memory); + if (s->have_iopmp) { + iopmp_dev = iopmp_create(memmap[VIRT_IOPMP].base, + qdev_get_gpio_in(DEVICE(mmio_irqchip), IOPMP_IRQ)); + + iopmp_setup_system_memory(iopmp_dev, &iopmp_protect_memmap[0], 1, 0); + + iopmp_disp_dev = qdev_new(TYPE_RISCV_IOPMP_DISP); + qdev_prop_set_uint32(DEVICE(iopmp_disp_dev), "target-num", 1); + qdev_prop_set_uint32(DEVICE(iopmp_disp_dev), "stage-num", 1); + qdev_realize(DEVICE(iopmp_disp_dev), NULL, &error_fatal); + + /* Add memmap information to dispatcher */ + iopmp_ss = (StreamSink *)&(RISCV_IOPMP(iopmp_dev)->txn_info_sink); + iopmp_dispatcher_add_target(DEVICE(iopmp_disp_dev), iopmp_ss, + iopmp_protect_memmap[0].base, + iopmp_protect_memmap[0].size, + 0, 0); + + iopmp_disp_ss = + (StreamSink *)&(RISCV_IOPMP_DISP(iopmp_disp_dev)->txn_info_sink); + iopmp_setup_sink(iopmp_dev, iopmp_disp_ss); + } + /* load/create device tree */ if (machine->dtb) { machine->fdt = load_device_tree(machine->dtb, &s->fdt_size); @@ 
-1845,6 +1900,20 @@ static void virt_set_iommu_sys(Object *obj, Visitor *v, const char *name, visit_type_OnOffAuto(v, name, &s->iommu_sys, errp); } +static bool virt_get_iopmp(Object *obj, Error **errp) +{ + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj); + + return s->have_iopmp; +} + +static void virt_set_iopmp(Object *obj, bool value, Error **errp) +{ + RISCVVirtState *s = RISCV_VIRT_MACHINE(obj); + + s->have_iopmp = value; +} + bool virt_is_acpi_enabled(RISCVVirtState *s) { return s->acpi != ON_OFF_AUTO_OFF; @@ -1972,6 +2041,12 @@ static void virt_machine_class_init(ObjectClass *oc, void *data) NULL, NULL); object_class_property_set_description(oc, "iommu-sys", "Enable IOMMU platform device"); + + object_class_property_add_bool(oc, "iopmp", virt_get_iopmp, + virt_set_iopmp); + object_class_property_set_description(oc, "iopmp", + "Set on/off to enable/disable " + "iopmp device"); } static const TypeInfo virt_machine_typeinfo = { diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h index 48a14bea2e..77dcbd5450 100644 --- a/include/hw/riscv/virt.h +++ b/include/hw/riscv/virt.h @@ -55,6 +55,7 @@ struct RISCVVirtState { int fdt_size; bool have_aclint; + bool have_iopmp; RISCVVirtAIAType aia_type; int aia_guests; char *oem_id; @@ -87,11 +88,14 @@ enum { VIRT_PLATFORM_BUS, VIRT_PCIE_ECAM, VIRT_IOMMU_SYS, + VIRT_IOPMP, }; enum { UART0_IRQ = 10, RTC_IRQ = 11, + IOPMP_IRQ = 12, + DMA_IRQ = 13, VIRTIO_IRQ = 1, /* 1 to 8 */ VIRTIO_COUNT = 8, PCIE_IRQ = 0x20, /* 32 to 35 */