From patchwork Wed May 15 11:30:05 2024
X-Patchwork-Submitter: Jacek Lawrynowicz
X-Patchwork-Id: 13665144
From: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
To: dri-devel@lists.freedesktop.org
Cc: oded.gabbay@gmail.com, quic_jhugo@quicinc.com, "Wachowski, Karol", Jacek Lawrynowicz
Subject: [PATCH 2/3] accel/ivpu: Split IP and buttress code
Date: Wed, 15 May 2024 13:30:05 +0200
Message-ID: <20240515113006.457472-3-jacek.lawrynowicz@linux.intel.com>
In-Reply-To: <20240515113006.457472-1-jacek.lawrynowicz@linux.intel.com>
References: <20240515113006.457472-1-jacek.lawrynowicz@linux.intel.com>

From: "Wachowski, Karol"

The NPU device consists of two parts: the NPU buttress and the NPU IP. The buttress is the platform-specific part that integrates the NPU IP with the CPU. The NPU IP is the platform-agnostic part that performs the inference.
This separation enables support for multiple platforms using a single NPU IP, so for example NPU IP 37XX could be integrated into MTL and LNL platforms. Signed-off-by: Wachowski, Karol Signed-off-by: Jacek Lawrynowicz --- drivers/accel/ivpu/Makefile | 5 +- drivers/accel/ivpu/ivpu_debugfs.c | 2 +- drivers/accel/ivpu/ivpu_drv.c | 13 +- drivers/accel/ivpu/ivpu_drv.h | 33 +- drivers/accel/ivpu/ivpu_fw.c | 20 +- drivers/accel/ivpu/ivpu_hw.c | 310 +++++++ drivers/accel/ivpu/ivpu_hw.h | 191 ++--- drivers/accel/ivpu/ivpu_hw_37xx.c | 1071 ------------------------ drivers/accel/ivpu/ivpu_hw_40xx.c | 1256 ----------------------------- drivers/accel/ivpu/ivpu_hw_btrs.c | 881 ++++++++++++++++++++ drivers/accel/ivpu/ivpu_hw_btrs.h | 46 ++ drivers/accel/ivpu/ivpu_hw_ip.c | 1174 +++++++++++++++++++++++++++ drivers/accel/ivpu/ivpu_hw_ip.h | 36 + drivers/accel/ivpu/ivpu_ipc.c | 6 +- drivers/accel/ivpu/ivpu_job.c | 2 +- 15 files changed, 2566 insertions(+), 2480 deletions(-) create mode 100644 drivers/accel/ivpu/ivpu_hw.c delete mode 100644 drivers/accel/ivpu/ivpu_hw_37xx.c delete mode 100644 drivers/accel/ivpu/ivpu_hw_40xx.c create mode 100644 drivers/accel/ivpu/ivpu_hw_btrs.c create mode 100644 drivers/accel/ivpu/ivpu_hw_btrs.h create mode 100644 drivers/accel/ivpu/ivpu_hw_ip.c create mode 100644 drivers/accel/ivpu/ivpu_hw_ip.h diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile index e16a9f5c1c89..ebd682a42eb1 100644 --- a/drivers/accel/ivpu/Makefile +++ b/drivers/accel/ivpu/Makefile @@ -6,8 +6,9 @@ intel_vpu-y := \ ivpu_fw.o \ ivpu_fw_log.o \ ivpu_gem.o \ - ivpu_hw_37xx.o \ - ivpu_hw_40xx.o \ + ivpu_hw.o \ + ivpu_hw_btrs.o \ + ivpu_hw_ip.o \ ivpu_ipc.o \ ivpu_job.o \ ivpu_jsm_msg.o \ diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c index b6c7d6a53c79..10d6408c9831 100644 --- a/drivers/accel/ivpu/ivpu_debugfs.c +++ b/drivers/accel/ivpu/ivpu_debugfs.c @@ -409,7 +409,7 @@ void ivpu_debugfs_init(struct ivpu_device *vdev) debugfs_create_file("resume_engine", 0200, debugfs_root, vdev, &ivpu_resume_engine_fops); - if (ivpu_hw_gen(vdev) >= IVPU_HW_40XX) + if (ivpu_hw_ip_gen(vdev) >= IVPU_HW_IP_40XX) debugfs_create_file("fw_profiling_freq_drive", 0200, debugfs_root, vdev, &fw_profiling_freq_fops); } diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c index 130455d39841..4725e3da1216 100644 --- a/drivers/accel/ivpu/ivpu_drv.c +++ b/drivers/accel/ivpu/ivpu_drv.c @@ -464,9 +464,11 @@ static int ivpu_irq_init(struct ivpu_device *vdev) return ret; } + ivpu_irq_handlers_init(vdev); + vdev->irq = pci_irq_vector(pdev, 0); - ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, vdev->hw->ops->irq_handler, + ret = devm_request_threaded_irq(vdev->drm.dev, vdev->irq, ivpu_hw_irq_handler, ivpu_irq_thread_handler, IRQF_NO_AUTOEN, DRIVER_NAME, vdev); if (ret) ivpu_err(vdev, "Failed to request an IRQ %d\n", ret); @@ -543,13 +545,10 @@ static int ivpu_dev_init(struct ivpu_device *vdev) if (!vdev->pm) return -ENOMEM; - if (ivpu_hw_gen(vdev) >= IVPU_HW_40XX) { - vdev->hw->ops = &ivpu_hw_40xx_ops; + if (ivpu_hw_ip_gen(vdev) >= IVPU_HW_IP_40XX) vdev->hw->dma_bits = 48; - } else { - vdev->hw->ops = &ivpu_hw_37xx_ops; + else vdev->hw->dma_bits = 38; - } vdev->platform = IVPU_PLATFORM_INVALID; vdev->context_xa_limit.min = IVPU_USER_CONTEXT_MIN_SSID; @@ -578,7 +577,7 @@ static int ivpu_dev_init(struct ivpu_device *vdev) goto err_xa_destroy; /* Init basic HW info based on buttress registers which are accessible before power up */ - ret = 
ivpu_hw_info_init(vdev); + ret = ivpu_hw_init(vdev); if (ret) goto err_xa_destroy; diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h index 4de7fc0c7026..39df96a7623b 100644 --- a/drivers/accel/ivpu/ivpu_drv.h +++ b/drivers/accel/ivpu/ivpu_drv.h @@ -27,8 +27,13 @@ #define PCI_DEVICE_ID_ARL 0xad1d #define PCI_DEVICE_ID_LNL 0x643e -#define IVPU_HW_37XX 37 -#define IVPU_HW_40XX 40 +#define IVPU_HW_IP_37XX 37 +#define IVPU_HW_IP_40XX 40 +#define IVPU_HW_IP_50XX 50 +#define IVPU_HW_IP_60XX 60 + +#define IVPU_HW_BTRS_MTL 1 +#define IVPU_HW_BTRS_LNL 2 #define IVPU_GLOBAL_CONTEXT_MMU_SSID 0 /* SSID 1 is used by the VPU to represent reserved context */ @@ -198,16 +203,32 @@ static inline u16 ivpu_device_id(struct ivpu_device *vdev) return to_pci_dev(vdev->drm.dev)->device; } -static inline int ivpu_hw_gen(struct ivpu_device *vdev) +static inline int ivpu_hw_ip_gen(struct ivpu_device *vdev) +{ + switch (ivpu_device_id(vdev)) { + case PCI_DEVICE_ID_MTL: + case PCI_DEVICE_ID_ARL: + return IVPU_HW_IP_37XX; + case PCI_DEVICE_ID_LNL: + return IVPU_HW_IP_40XX; + default: + dump_stack(); + ivpu_err(vdev, "Unknown NPU IP generation\n"); + return 0; + } +} + +static inline int ivpu_hw_btrs_gen(struct ivpu_device *vdev) { switch (ivpu_device_id(vdev)) { case PCI_DEVICE_ID_MTL: case PCI_DEVICE_ID_ARL: - return IVPU_HW_37XX; + return IVPU_HW_BTRS_MTL; case PCI_DEVICE_ID_LNL: - return IVPU_HW_40XX; + return IVPU_HW_BTRS_LNL; default: - ivpu_err(vdev, "Unknown NPU device\n"); + dump_stack(); + ivpu_err(vdev, "Unknown buttress generation\n"); return 0; } } diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c index 427cd72bd34f..1fc4befe7461 100644 --- a/drivers/accel/ivpu/ivpu_fw.c +++ b/drivers/accel/ivpu/ivpu_fw.c @@ -54,10 +54,10 @@ static struct { int gen; const char *name; } fw_names[] = { - { IVPU_HW_37XX, "vpu_37xx.bin" }, - { IVPU_HW_37XX, "intel/vpu/vpu_37xx_v0.0.bin" }, - { IVPU_HW_40XX, "vpu_40xx.bin" }, - { IVPU_HW_40XX, "intel/vpu/vpu_40xx_v0.0.bin" }, + { IVPU_HW_IP_37XX, "vpu_37xx.bin" }, + { IVPU_HW_IP_37XX, "intel/vpu/vpu_37xx_v0.0.bin" }, + { IVPU_HW_IP_40XX, "vpu_40xx.bin" }, + { IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v0.0.bin" }, }; static int ivpu_fw_request(struct ivpu_device *vdev) @@ -73,7 +73,7 @@ static int ivpu_fw_request(struct ivpu_device *vdev) } for (i = 0; i < ARRAY_SIZE(fw_names); i++) { - if (fw_names[i].gen != ivpu_hw_gen(vdev)) + if (fw_names[i].gen != ivpu_hw_ip_gen(vdev)) continue; ret = firmware_request_nowarn(&vdev->fw->file, fw_names[i].name, vdev->drm.dev); @@ -246,7 +246,7 @@ static int ivpu_fw_update_global_range(struct ivpu_device *vdev) return -EINVAL; } - ivpu_hw_init_range(&vdev->hw->ranges.global, start, size); + ivpu_hw_range_init(&vdev->hw->ranges.global, start, size); return 0; } @@ -511,7 +511,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params boot_params->magic = VPU_BOOT_PARAMS_MAGIC; boot_params->vpu_id = to_pci_dev(vdev->drm.dev)->bus->number; - boot_params->frequency = ivpu_hw_reg_pll_freq_get(vdev); + boot_params->frequency = ivpu_hw_pll_freq_get(vdev); /* * This param is a debug firmware feature. 
It switches default clock @@ -568,9 +568,9 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params boot_params->verbose_tracing_buff_addr = vdev->fw->mem_log_verb->vpu_addr; boot_params->verbose_tracing_buff_size = ivpu_bo_size(vdev->fw->mem_log_verb); - boot_params->punit_telemetry_sram_base = ivpu_hw_reg_telemetry_offset_get(vdev); - boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev); - boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev); + boot_params->punit_telemetry_sram_base = ivpu_hw_telemetry_offset_get(vdev); + boot_params->punit_telemetry_sram_size = ivpu_hw_telemetry_size_get(vdev); + boot_params->vpu_telemetry_enable = ivpu_hw_telemetry_enable_get(vdev); boot_params->vpu_scheduling_mode = vdev->hw->sched_mode; if (vdev->hw->sched_mode == VPU_SCHEDULING_MODE_HW) boot_params->vpu_focus_present_timer_ms = IVPU_FOCUS_PRESENT_TIMER_MS; diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c new file mode 100644 index 000000000000..1850798c3ccf --- /dev/null +++ b/drivers/accel/ivpu/ivpu_hw.c @@ -0,0 +1,310 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020 - 2024 Intel Corporation + */ + +#include "ivpu_drv.h" +#include "ivpu_hw.h" +#include "ivpu_hw_btrs.h" +#include "ivpu_hw_ip.h" + +#include + +static char *platform_to_str(u32 platform) +{ + switch (platform) { + case IVPU_PLATFORM_SILICON: + return "SILICON"; + case IVPU_PLATFORM_SIMICS: + return "SIMICS"; + case IVPU_PLATFORM_FPGA: + return "FPGA"; + default: + return "Invalid platform"; + } +} + +static const struct dmi_system_id dmi_platform_simulation[] = { + { + .ident = "Intel Simics", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "lnlrvp"), + DMI_MATCH(DMI_BOARD_VERSION, "1.0"), + DMI_MATCH(DMI_BOARD_SERIAL, "123456789"), + }, + }, + { + .ident = "Intel Simics", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "Simics"), + }, + }, + { } +}; + +static void platform_init(struct ivpu_device *vdev) +{ + if (dmi_check_system(dmi_platform_simulation)) + vdev->platform = IVPU_PLATFORM_SIMICS; + else + vdev->platform = IVPU_PLATFORM_SILICON; + + ivpu_dbg(vdev, MISC, "Platform type: %s (%d)\n", + platform_to_str(vdev->platform), vdev->platform); +} + +static void wa_init(struct ivpu_device *vdev) +{ + vdev->wa.punit_disabled = ivpu_is_fpga(vdev); + vdev->wa.clear_runtime_mem = false; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + vdev->wa.interrupt_clear_with_0 = ivpu_hw_btrs_irqs_clear_with_0_mtl(vdev); + + if (ivpu_device_id(vdev) == PCI_DEVICE_ID_LNL) + vdev->wa.disable_clock_relinquish = true; + + IVPU_PRINT_WA(punit_disabled); + IVPU_PRINT_WA(clear_runtime_mem); + IVPU_PRINT_WA(interrupt_clear_with_0); + IVPU_PRINT_WA(disable_clock_relinquish); +} + +static void timeouts_init(struct ivpu_device *vdev) +{ + if (ivpu_is_fpga(vdev)) { + vdev->timeout.boot = 100000; + vdev->timeout.jsm = 50000; + vdev->timeout.tdr = 2000000; + vdev->timeout.reschedule_suspend = 1000; + vdev->timeout.autosuspend = -1; + vdev->timeout.d0i3_entry_msg = 500; + } else if (ivpu_is_simics(vdev)) { + vdev->timeout.boot = 50; + vdev->timeout.jsm = 500; + vdev->timeout.tdr = 10000; + vdev->timeout.reschedule_suspend = 10; + vdev->timeout.autosuspend = -1; + vdev->timeout.d0i3_entry_msg = 100; + } else { + vdev->timeout.boot = 1000; + vdev->timeout.jsm = 500; + vdev->timeout.tdr = 2000; + vdev->timeout.reschedule_suspend = 10; + vdev->timeout.autosuspend = 10; + vdev->timeout.d0i3_entry_msg = 5; + } +} + +static void memory_ranges_init(struct 
ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + ivpu_hw_range_init(&vdev->hw->ranges.global, 0x80000000, SZ_512M); + ivpu_hw_range_init(&vdev->hw->ranges.user, 0xc0000000, 255 * SZ_1M); + ivpu_hw_range_init(&vdev->hw->ranges.shave, 0x180000000, SZ_2G); + ivpu_hw_range_init(&vdev->hw->ranges.dma, 0x200000000, SZ_8G); + } else { + ivpu_hw_range_init(&vdev->hw->ranges.global, 0x80000000, SZ_512M); + ivpu_hw_range_init(&vdev->hw->ranges.user, 0x80000000, SZ_256M); + ivpu_hw_range_init(&vdev->hw->ranges.shave, 0x80000000 + SZ_256M, SZ_2G - SZ_256M); + ivpu_hw_range_init(&vdev->hw->ranges.dma, 0x200000000, SZ_8G); + } +} + +static int wp_enable(struct ivpu_device *vdev) +{ + return ivpu_hw_btrs_wp_drive(vdev, true); +} + +static int wp_disable(struct ivpu_device *vdev) +{ + return ivpu_hw_btrs_wp_drive(vdev, false); +} + +int ivpu_hw_power_up(struct ivpu_device *vdev) +{ + int ret; + + ret = ivpu_hw_btrs_d0i3_disable(vdev); + if (ret) + ivpu_warn(vdev, "Failed to disable D0I3: %d\n", ret); + + ret = wp_enable(vdev); + if (ret) { + ivpu_err(vdev, "Failed to enable workpoint: %d\n", ret); + return ret; + } + + if (ivpu_hw_btrs_gen(vdev) >= IVPU_HW_BTRS_LNL) { + if (IVPU_WA(disable_clock_relinquish)) + ivpu_hw_btrs_clock_relinquish_disable_lnl(vdev); + ivpu_hw_btrs_profiling_freq_reg_set_lnl(vdev); + ivpu_hw_btrs_ats_print_lnl(vdev); + } + + ret = ivpu_hw_ip_host_ss_configure(vdev); + if (ret) { + ivpu_err(vdev, "Failed to configure host SS: %d\n", ret); + return ret; + } + + ivpu_hw_ip_idle_gen_disable(vdev); + + ret = ivpu_hw_btrs_wait_for_clock_res_own_ack(vdev); + if (ret) { + ivpu_err(vdev, "Timed out waiting for clock resource own ACK\n"); + return ret; + } + + ret = ivpu_hw_ip_pwr_domain_enable(vdev); + if (ret) { + ivpu_err(vdev, "Failed to enable power domain: %d\n", ret); + return ret; + } + + ret = ivpu_hw_ip_host_ss_axi_enable(vdev); + if (ret) { + ivpu_err(vdev, "Failed to enable AXI: %d\n", ret); + return ret; + } + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_LNL) + ivpu_hw_btrs_set_port_arbitration_weights_lnl(vdev); + + ret = ivpu_hw_ip_top_noc_enable(vdev); + if (ret) + ivpu_err(vdev, "Failed to enable TOP NOC: %d\n", ret); + + return ret; +} + +static void save_d0i3_entry_timestamp(struct ivpu_device *vdev) +{ + vdev->hw->d0i3_entry_host_ts = ktime_get_boottime(); + vdev->hw->d0i3_entry_vpu_ts = ivpu_hw_ip_read_perf_timer_counter(vdev); +} + +int ivpu_hw_reset(struct ivpu_device *vdev) +{ + int ret = 0; + + if (ivpu_hw_btrs_ip_reset(vdev)) { + ivpu_err(vdev, "Failed to reset NPU IP\n"); + ret = -EIO; + } + + if (wp_disable(vdev)) { + ivpu_err(vdev, "Failed to disable workpoint\n"); + ret = -EIO; + } + + return ret; +} + +int ivpu_hw_power_down(struct ivpu_device *vdev) +{ + int ret = 0; + + save_d0i3_entry_timestamp(vdev); + + if (!ivpu_hw_is_idle(vdev)) + ivpu_warn(vdev, "NPU not idle during power down\n"); + + if (ivpu_hw_reset(vdev)) { + ivpu_err(vdev, "Failed to reset NPU\n"); + ret = -EIO; + } + + if (ivpu_hw_btrs_d0i3_enable(vdev)) { + ivpu_err(vdev, "Failed to enter D0I3\n"); + ret = -EIO; + } + + return ret; +} + +int ivpu_hw_init(struct ivpu_device *vdev) +{ + ivpu_hw_btrs_info_init(vdev); + ivpu_hw_btrs_freq_ratios_init(vdev); + memory_ranges_init(vdev); + platform_init(vdev); + wa_init(vdev); + timeouts_init(vdev); + + return 0; +} + +int ivpu_hw_boot_fw(struct ivpu_device *vdev) +{ + int ret; + + ivpu_hw_ip_snoop_disable(vdev); + ivpu_hw_ip_tbu_mmu_enable(vdev); + ret = ivpu_hw_ip_soc_cpu_boot(vdev); + if (ret) + ivpu_err(vdev, "Failed to boot 
SOC CPU: %d\n", ret); + + return ret; +} + +void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT; + return; + } + + if (enable) + vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_HIGH; + else + vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT; +} + +void ivpu_irq_handlers_init(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_37xx; + else + vdev->hw->irq.ip_irq_handler = ivpu_hw_ip_irq_handler_40xx; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + vdev->hw->irq.btrs_irq_handler = ivpu_hw_btrs_irq_handler_mtl; + else + vdev->hw->irq.btrs_irq_handler = ivpu_hw_btrs_irq_handler_lnl; +} + +void ivpu_hw_irq_enable(struct ivpu_device *vdev) +{ + ivpu_hw_ip_irq_enable(vdev); + ivpu_hw_btrs_irq_enable(vdev); +} + +void ivpu_hw_irq_disable(struct ivpu_device *vdev) +{ + ivpu_hw_btrs_irq_disable(vdev); + ivpu_hw_ip_irq_disable(vdev); +} + +irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr) +{ + bool ip_handled, btrs_handled, wake_thread = false; + struct ivpu_device *vdev = ptr; + + ivpu_hw_btrs_global_int_disable(vdev); + + btrs_handled = ivpu_hw_btrs_irq_handler(vdev, irq); + if (!ivpu_hw_is_idle((vdev)) || !btrs_handled) + ip_handled = ivpu_hw_ip_irq_handler(vdev, irq, &wake_thread); + else + ip_handled = false; + + /* Re-enable global interrupts to re-trigger MSI for pending interrupts */ + ivpu_hw_btrs_global_int_enable(vdev); + + if (wake_thread) + return IRQ_WAKE_THREAD; + if (ip_handled || btrs_handled) + return IRQ_HANDLED; + return IRQ_NONE; +} diff --git a/drivers/accel/ivpu/ivpu_hw.h b/drivers/accel/ivpu/ivpu_hw.h index d247a2e99496..9d400d987d04 100644 --- a/drivers/accel/ivpu/ivpu_hw.h +++ b/drivers/accel/ivpu/ivpu_hw.h @@ -1,39 +1,14 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * Copyright (C) 2020-2024 Intel Corporation + * Copyright (C) 2020 - 2024 Intel Corporation */ #ifndef __IVPU_HW_H__ #define __IVPU_HW_H__ #include "ivpu_drv.h" - -struct ivpu_hw_ops { - int (*info_init)(struct ivpu_device *vdev); - int (*power_up)(struct ivpu_device *vdev); - int (*boot_fw)(struct ivpu_device *vdev); - int (*power_down)(struct ivpu_device *vdev); - int (*reset)(struct ivpu_device *vdev); - bool (*is_idle)(struct ivpu_device *vdev); - int (*wait_for_idle)(struct ivpu_device *vdev); - void (*wdt_disable)(struct ivpu_device *vdev); - void (*diagnose_failure)(struct ivpu_device *vdev); - u32 (*profiling_freq_get)(struct ivpu_device *vdev); - void (*profiling_freq_drive)(struct ivpu_device *vdev, bool enable); - u32 (*reg_pll_freq_get)(struct ivpu_device *vdev); - u32 (*ratio_to_freq)(struct ivpu_device *vdev, u32 ratio); - u32 (*reg_telemetry_offset_get)(struct ivpu_device *vdev); - u32 (*reg_telemetry_size_get)(struct ivpu_device *vdev); - u32 (*reg_telemetry_enable_get)(struct ivpu_device *vdev); - void (*reg_db_set)(struct ivpu_device *vdev, u32 db_id); - u32 (*reg_ipc_rx_addr_get)(struct ivpu_device *vdev); - u32 (*reg_ipc_rx_count_get)(struct ivpu_device *vdev); - void (*reg_ipc_tx_set)(struct ivpu_device *vdev, u32 vpu_addr); - void (*irq_clear)(struct ivpu_device *vdev); - void (*irq_enable)(struct ivpu_device *vdev); - void (*irq_disable)(struct ivpu_device *vdev); - irqreturn_t (*irq_handler)(int irq, void *ptr); -}; +#include "ivpu_hw_btrs.h" +#include "ivpu_hw_ip.h" struct ivpu_addr_range { resource_size_t start; @@ -41,7 +16,10 @@ struct ivpu_addr_range 
{ }; struct ivpu_hw_info { - const struct ivpu_hw_ops *ops; + struct { + bool (*btrs_irq_handler)(struct ivpu_device *vdev, int irq); + bool (*ip_irq_handler)(struct ivpu_device *vdev, int irq, bool *wake_thread); + } irq; struct { struct ivpu_addr_range global; struct ivpu_addr_range user; @@ -67,140 +45,107 @@ struct ivpu_hw_info { u64 d0i3_entry_vpu_ts; }; -extern const struct ivpu_hw_ops ivpu_hw_37xx_ops; -extern const struct ivpu_hw_ops ivpu_hw_40xx_ops; - -static inline int ivpu_hw_info_init(struct ivpu_device *vdev) -{ - return vdev->hw->ops->info_init(vdev); -}; - -static inline int ivpu_hw_power_up(struct ivpu_device *vdev) -{ - ivpu_dbg(vdev, PM, "HW power up\n"); - - return vdev->hw->ops->power_up(vdev); -}; +int ivpu_hw_init(struct ivpu_device *vdev); +int ivpu_hw_power_up(struct ivpu_device *vdev); +int ivpu_hw_power_down(struct ivpu_device *vdev); +int ivpu_hw_reset(struct ivpu_device *vdev); +int ivpu_hw_boot_fw(struct ivpu_device *vdev); +void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable); +void ivpu_irq_handlers_init(struct ivpu_device *vdev); +void ivpu_hw_irq_enable(struct ivpu_device *vdev); +void ivpu_hw_irq_disable(struct ivpu_device *vdev); +irqreturn_t ivpu_hw_irq_handler(int irq, void *ptr); -static inline int ivpu_hw_boot_fw(struct ivpu_device *vdev) +static inline u32 ivpu_hw_btrs_irq_handler(struct ivpu_device *vdev, int irq) { - return vdev->hw->ops->boot_fw(vdev); -}; - -static inline bool ivpu_hw_is_idle(struct ivpu_device *vdev) -{ - return vdev->hw->ops->is_idle(vdev); -}; - -static inline int ivpu_hw_wait_for_idle(struct ivpu_device *vdev) -{ - return vdev->hw->ops->wait_for_idle(vdev); -}; - -static inline int ivpu_hw_power_down(struct ivpu_device *vdev) -{ - ivpu_dbg(vdev, PM, "HW power down\n"); - - return vdev->hw->ops->power_down(vdev); -}; - -static inline int ivpu_hw_reset(struct ivpu_device *vdev) -{ - ivpu_dbg(vdev, PM, "HW reset\n"); - - return vdev->hw->ops->reset(vdev); -}; - -static inline void ivpu_hw_wdt_disable(struct ivpu_device *vdev) -{ - vdev->hw->ops->wdt_disable(vdev); -}; + return vdev->hw->irq.btrs_irq_handler(vdev, irq); +} -static inline u32 ivpu_hw_profiling_freq_get(struct ivpu_device *vdev) +static inline u32 ivpu_hw_ip_irq_handler(struct ivpu_device *vdev, int irq, bool *wake_thread) { - return vdev->hw->ops->profiling_freq_get(vdev); -}; + return vdev->hw->irq.ip_irq_handler(vdev, irq, wake_thread); +} -static inline void ivpu_hw_profiling_freq_drive(struct ivpu_device *vdev, bool enable) +static inline void ivpu_hw_range_init(struct ivpu_addr_range *range, u64 start, u64 size) { - return vdev->hw->ops->profiling_freq_drive(vdev, enable); -}; + range->start = start; + range->end = start + size; +} -/* Register indirect accesses */ -static inline u32 ivpu_hw_reg_pll_freq_get(struct ivpu_device *vdev) +static inline u64 ivpu_hw_range_size(const struct ivpu_addr_range *range) { - return vdev->hw->ops->reg_pll_freq_get(vdev); -}; + return range->end - range->start; +} static inline u32 ivpu_hw_ratio_to_freq(struct ivpu_device *vdev, u32 ratio) { - return vdev->hw->ops->ratio_to_freq(vdev, ratio); + return ivpu_hw_btrs_ratio_to_freq(vdev, ratio); } -static inline u32 ivpu_hw_reg_telemetry_offset_get(struct ivpu_device *vdev) +static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev) { - return vdev->hw->ops->reg_telemetry_offset_get(vdev); -}; + ivpu_hw_ip_irq_clear(vdev); +} -static inline u32 ivpu_hw_reg_telemetry_size_get(struct ivpu_device *vdev) +static inline u32 ivpu_hw_pll_freq_get(struct 
ivpu_device *vdev) { - return vdev->hw->ops->reg_telemetry_size_get(vdev); -}; + return ivpu_hw_btrs_pll_freq_get(vdev); +} -static inline u32 ivpu_hw_reg_telemetry_enable_get(struct ivpu_device *vdev) +static inline u32 ivpu_hw_profiling_freq_get(struct ivpu_device *vdev) { - return vdev->hw->ops->reg_telemetry_enable_get(vdev); -}; + return vdev->hw->pll.profiling_freq; +} -static inline void ivpu_hw_reg_db_set(struct ivpu_device *vdev, u32 db_id) +static inline void ivpu_hw_diagnose_failure(struct ivpu_device *vdev) { - vdev->hw->ops->reg_db_set(vdev, db_id); -}; + ivpu_hw_ip_diagnose_failure(vdev); + ivpu_hw_btrs_diagnose_failure(vdev); +} -static inline u32 ivpu_hw_reg_ipc_rx_addr_get(struct ivpu_device *vdev) +static inline u32 ivpu_hw_telemetry_offset_get(struct ivpu_device *vdev) { - return vdev->hw->ops->reg_ipc_rx_addr_get(vdev); -}; + return ivpu_hw_btrs_telemetry_offset_get(vdev); +} -static inline u32 ivpu_hw_reg_ipc_rx_count_get(struct ivpu_device *vdev) +static inline u32 ivpu_hw_telemetry_size_get(struct ivpu_device *vdev) { - return vdev->hw->ops->reg_ipc_rx_count_get(vdev); -}; + return ivpu_hw_btrs_telemetry_size_get(vdev); +} -static inline void ivpu_hw_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) +static inline u32 ivpu_hw_telemetry_enable_get(struct ivpu_device *vdev) { - vdev->hw->ops->reg_ipc_tx_set(vdev, vpu_addr); -}; + return ivpu_hw_btrs_telemetry_enable_get(vdev); +} -static inline void ivpu_hw_irq_clear(struct ivpu_device *vdev) +static inline bool ivpu_hw_is_idle(struct ivpu_device *vdev) { - vdev->hw->ops->irq_clear(vdev); -}; + return ivpu_hw_btrs_is_idle(vdev); +} -static inline void ivpu_hw_irq_enable(struct ivpu_device *vdev) +static inline int ivpu_hw_wait_for_idle(struct ivpu_device *vdev) { - vdev->hw->ops->irq_enable(vdev); -}; + return ivpu_hw_btrs_wait_for_idle(vdev); +} -static inline void ivpu_hw_irq_disable(struct ivpu_device *vdev) +static inline void ivpu_hw_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) { - vdev->hw->ops->irq_disable(vdev); -}; + ivpu_hw_ip_ipc_tx_set(vdev, vpu_addr); +} -static inline void ivpu_hw_init_range(struct ivpu_addr_range *range, u64 start, u64 size) +static inline void ivpu_hw_db_set(struct ivpu_device *vdev, u32 db_id) { - range->start = start; - range->end = start + size; + ivpu_hw_ip_db_set(vdev, db_id); } -static inline u64 ivpu_hw_range_size(const struct ivpu_addr_range *range) +static inline u32 ivpu_hw_ipc_rx_addr_get(struct ivpu_device *vdev) { - return range->end - range->start; + return ivpu_hw_ip_ipc_rx_addr_get(vdev); } -static inline void ivpu_hw_diagnose_failure(struct ivpu_device *vdev) +static inline u32 ivpu_hw_ipc_rx_count_get(struct ivpu_device *vdev) { - vdev->hw->ops->diagnose_failure(vdev); + return ivpu_hw_ip_ipc_rx_count_get(vdev); } #endif /* __IVPU_HW_H__ */ diff --git a/drivers/accel/ivpu/ivpu_hw_37xx.c b/drivers/accel/ivpu/ivpu_hw_37xx.c deleted file mode 100644 index fb5046896e10..000000000000 --- a/drivers/accel/ivpu/ivpu_hw_37xx.c +++ /dev/null @@ -1,1071 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * Copyright (C) 2020-2024 Intel Corporation - */ - -#include "ivpu_drv.h" -#include "ivpu_fw.h" -#include "ivpu_hw_btrs_mtl_reg.h" -#include "ivpu_hw_37xx_reg.h" -#include "ivpu_hw_reg_io.h" -#include "ivpu_hw.h" -#include "ivpu_ipc.h" -#include "ivpu_mmu.h" -#include "ivpu_pm.h" - -#define TILE_FUSE_ENABLE_BOTH 0x0 -#define TILE_SKU_BOTH 0x3630 - -/* Work point configuration values */ -#define CONFIG_1_TILE 0x01 -#define CONFIG_2_TILE 0x02 -#define PLL_RATIO_5_3 
0x01 -#define PLL_RATIO_4_3 0x02 -#define WP_CONFIG(tile, ratio) (((tile) << 8) | (ratio)) -#define WP_CONFIG_1_TILE_5_3_RATIO WP_CONFIG(CONFIG_1_TILE, PLL_RATIO_5_3) -#define WP_CONFIG_1_TILE_4_3_RATIO WP_CONFIG(CONFIG_1_TILE, PLL_RATIO_4_3) -#define WP_CONFIG_2_TILE_5_3_RATIO WP_CONFIG(CONFIG_2_TILE, PLL_RATIO_5_3) -#define WP_CONFIG_2_TILE_4_3_RATIO WP_CONFIG(CONFIG_2_TILE, PLL_RATIO_4_3) -#define WP_CONFIG_0_TILE_PLL_OFF WP_CONFIG(0, 0) - -#define PLL_REF_CLK_FREQ (50 * 1000000) -#define PLL_SIMULATION_FREQ (10 * 1000000) -#define PLL_PROF_CLK_FREQ (38400 * 1000) -#define PLL_DEFAULT_EPP_VALUE 0x80 - -#define TIM_SAFE_ENABLE 0xf1d0dead -#define TIM_WATCHDOG_RESET_VALUE 0xffffffff - -#define TIMEOUT_US (150 * USEC_PER_MSEC) -#define PWR_ISLAND_STATUS_TIMEOUT_US (5 * USEC_PER_MSEC) -#define PLL_TIMEOUT_US (1500 * USEC_PER_MSEC) -#define IDLE_TIMEOUT_US (5 * USEC_PER_MSEC) - -#define ICB_0_IRQ_MASK ((REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT))) - -#define ICB_1_IRQ_MASK ((REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_2_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_3_INT)) | \ - (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_4_INT))) - -#define ICB_0_1_IRQ_MASK ((((u64)ICB_1_IRQ_MASK) << 32) | ICB_0_IRQ_MASK) - -#define BUTTRESS_IRQ_MASK ((REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR))) - -#define BUTTRESS_ALL_IRQ_MASK (BUTTRESS_IRQ_MASK | \ - (REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, FREQ_CHANGE))) - -#define BUTTRESS_IRQ_ENABLE_MASK ((u32)~BUTTRESS_IRQ_MASK) -#define BUTTRESS_IRQ_DISABLE_MASK ((u32)-1) - -#define ITF_FIREWALL_VIOLATION_MASK ((REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_ROM_CMX)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_DBG)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_CTRL)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, DEC400)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_NCE)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI)) | \ - (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI_CMX))) - -static void ivpu_hw_wa_init(struct ivpu_device *vdev) -{ - vdev->wa.punit_disabled = false; - vdev->wa.clear_runtime_mem = false; - - REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, BUTTRESS_ALL_IRQ_MASK); - if (REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) == BUTTRESS_ALL_IRQ_MASK) { - /* Writing 1s does not clear the interrupt status register */ - vdev->wa.interrupt_clear_with_0 = true; - REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, 0x0); - } - - IVPU_PRINT_WA(punit_disabled); - IVPU_PRINT_WA(clear_runtime_mem); - IVPU_PRINT_WA(interrupt_clear_with_0); -} - -static void ivpu_hw_timeouts_init(struct ivpu_device *vdev) -{ - vdev->timeout.boot = 1000; - vdev->timeout.jsm = 500; - vdev->timeout.tdr = 2000; - vdev->timeout.reschedule_suspend = 10; - vdev->timeout.autosuspend = 10; - vdev->timeout.d0i3_entry_msg = 5; -} - -static int ivpu_pll_wait_for_cmd_send(struct ivpu_device *vdev) -{ - return REGB_POLL_FLD(VPU_HW_BTRS_MTL_WP_REQ_CMD, SEND, 0, PLL_TIMEOUT_US); -} - -/* Send KMD initiated workpoint change */ -static int ivpu_pll_cmd_send(struct ivpu_device *vdev, u16 
min_ratio, u16 max_ratio, - u16 target_ratio, u16 config) -{ - int ret; - u32 val; - - ret = ivpu_pll_wait_for_cmd_send(vdev); - if (ret) { - ivpu_err(vdev, "Failed to sync before WP request: %d\n", ret); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, MIN_RATIO, min_ratio, val); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, MAX_RATIO, max_ratio, val); - REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, val); - - val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, TARGET_RATIO, target_ratio, val); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, EPP, PLL_DEFAULT_EPP_VALUE, val); - REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, val); - - val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2, CONFIG, config, val); - REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2, val); - - val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_CMD); - val = REG_SET_FLD(VPU_HW_BTRS_MTL_WP_REQ_CMD, SEND, val); - REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_CMD, val); - - ret = ivpu_pll_wait_for_cmd_send(vdev); - if (ret) - ivpu_err(vdev, "Failed to sync after WP request: %d\n", ret); - - return ret; -} - -static int ivpu_pll_wait_for_lock(struct ivpu_device *vdev, bool enable) -{ - u32 exp_val = enable ? 0x1 : 0x0; - - if (IVPU_WA(punit_disabled)) - return 0; - - return REGB_POLL_FLD(VPU_HW_BTRS_MTL_PLL_STATUS, LOCK, exp_val, PLL_TIMEOUT_US); -} - -static int ivpu_pll_wait_for_status_ready(struct ivpu_device *vdev) -{ - if (IVPU_WA(punit_disabled)) - return 0; - - return REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, READY, 1, PLL_TIMEOUT_US); -} - -static void ivpu_pll_init_frequency_ratios(struct ivpu_device *vdev) -{ - struct ivpu_hw_info *hw = vdev->hw; - u8 fuse_min_ratio, fuse_max_ratio, fuse_pn_ratio; - u32 fmin_fuse, fmax_fuse; - - fmin_fuse = REGB_RD32(VPU_HW_BTRS_MTL_FMIN_FUSE); - fuse_min_ratio = REG_GET_FLD(VPU_HW_BTRS_MTL_FMIN_FUSE, MIN_RATIO, fmin_fuse); - fuse_pn_ratio = REG_GET_FLD(VPU_HW_BTRS_MTL_FMIN_FUSE, PN_RATIO, fmin_fuse); - - fmax_fuse = REGB_RD32(VPU_HW_BTRS_MTL_FMAX_FUSE); - fuse_max_ratio = REG_GET_FLD(VPU_HW_BTRS_MTL_FMAX_FUSE, MAX_RATIO, fmax_fuse); - - hw->pll.min_ratio = clamp_t(u8, ivpu_pll_min_ratio, fuse_min_ratio, fuse_max_ratio); - hw->pll.max_ratio = clamp_t(u8, ivpu_pll_max_ratio, hw->pll.min_ratio, fuse_max_ratio); - hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); -} - -static int ivpu_hw_37xx_wait_for_vpuip_bar(struct ivpu_device *vdev) -{ - return REGV_POLL_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, AON, 0, 100); -} - -static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) -{ - struct ivpu_hw_info *hw = vdev->hw; - u16 target_ratio; - u16 config; - int ret; - - if (IVPU_WA(punit_disabled)) { - ivpu_dbg(vdev, PM, "Skipping PLL request\n"); - return 0; - } - - if (enable) { - target_ratio = hw->pll.pn_ratio; - config = hw->config; - } else { - target_ratio = 0; - config = 0; - } - - ivpu_dbg(vdev, PM, "PLL workpoint request: config 0x%04x pll ratio 0x%x\n", - config, target_ratio); - - ret = ivpu_pll_cmd_send(vdev, hw->pll.min_ratio, hw->pll.max_ratio, target_ratio, config); - if (ret) { - ivpu_err(vdev, "Failed to send PLL workpoint request: %d\n", ret); - return ret; - } - - ret = ivpu_pll_wait_for_lock(vdev, enable); - if (ret) { - ivpu_err(vdev, "Timed out waiting for PLL lock\n"); - return ret; - } - - if (enable) { - ret = ivpu_pll_wait_for_status_ready(vdev); - if (ret) { 
- ivpu_err(vdev, "Timed out waiting for PLL ready status\n"); - return ret; - } - - ret = ivpu_hw_37xx_wait_for_vpuip_bar(vdev); - if (ret) { - ivpu_err(vdev, "Timed out waiting for NPU IP bar\n"); - return ret; - } - } - - return 0; -} - -static int ivpu_pll_enable(struct ivpu_device *vdev) -{ - return ivpu_pll_drive(vdev, true); -} - -static int ivpu_pll_disable(struct ivpu_device *vdev) -{ - return ivpu_pll_drive(vdev, false); -} - -static void ivpu_boot_host_ss_rst_clr_assert(struct ivpu_device *vdev) -{ - u32 val = 0; - - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, TOP_NOC, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, DSS_MAS, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, MSS_MAS, val); - - REGV_WR32(VPU_37XX_HOST_SS_CPR_RST_CLR, val); -} - -static void ivpu_boot_host_ss_rst_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_CPR_RST_SET); - - if (enable) { - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, TOP_NOC, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, DSS_MAS, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, MSS_MAS, val); - } else { - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, TOP_NOC, val); - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, DSS_MAS, val); - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, MSS_MAS, val); - } - - REGV_WR32(VPU_37XX_HOST_SS_CPR_RST_SET, val); -} - -static void ivpu_boot_host_ss_clk_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_CPR_CLK_SET); - - if (enable) { - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, TOP_NOC, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, DSS_MAS, val); - val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, MSS_MAS, val); - } else { - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, TOP_NOC, val); - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, DSS_MAS, val); - val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, MSS_MAS, val); - } - - REGV_WR32(VPU_37XX_HOST_SS_CPR_CLK_SET, val); -} - -static int ivpu_boot_noc_qreqn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QREQN); - - if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QACCEPTN); - - if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QACCEPTN, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QDENY); - - if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QDENY, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_top_noc_qrenqn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); - - if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QACCEPTN); - - if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_top_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QDENY); - - if 
(!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_host_ss_configure(struct ivpu_device *vdev) -{ - ivpu_boot_host_ss_rst_clr_assert(vdev); - - return ivpu_boot_noc_qreqn_check(vdev, 0x0); -} - -static void ivpu_boot_vpu_idle_gen_disable(struct ivpu_device *vdev) -{ - REGV_WR32(VPU_37XX_HOST_SS_AON_VPU_IDLE_GEN, 0x0); -} - -static int ivpu_boot_host_ss_axi_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QREQN); - if (enable) - val = REG_SET_FLD(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); - else - val = REG_CLR_FLD(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); - REGV_WR32(VPU_37XX_HOST_SS_NOC_QREQN, val); - - ret = ivpu_boot_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_noc_qdeny_check(vdev, 0x0); - if (ret) - ivpu_err(vdev, "Failed qdeny check: %d\n", ret); - - return ret; -} - -static int ivpu_boot_host_ss_axi_enable(struct ivpu_device *vdev) -{ - return ivpu_boot_host_ss_axi_drive(vdev, true); -} - -static int ivpu_boot_host_ss_top_noc_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); - if (enable) { - val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); - val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); - } else { - val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); - val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); - } - REGV_WR32(VPU_37XX_TOP_NOC_QREQN, val); - - ret = ivpu_boot_top_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_top_noc_qdeny_check(vdev, 0x0); - if (ret) - ivpu_err(vdev, "Failed qdeny check: %d\n", ret); - - return ret; -} - -static int ivpu_boot_host_ss_top_noc_enable(struct ivpu_device *vdev) -{ - return ivpu_boot_host_ss_top_noc_drive(vdev, true); -} - -static void ivpu_boot_pwr_island_trickle_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0); - - if (enable) - val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val); - else - val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val); - - REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, val); -} - -static void ivpu_boot_pwr_island_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0); - - if (enable) - val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val); - else - val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val); - - REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, val); -} - -static int ivpu_boot_wait_for_pwr_island_status(struct ivpu_device *vdev, u32 exp_val) -{ - return REGV_POLL_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0, MSS_CPU, - exp_val, PWR_ISLAND_STATUS_TIMEOUT_US); -} - -static void ivpu_boot_pwr_island_isolation_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0); - - if (enable) - val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val); - else - val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val); - - REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, val); -} - -static void ivpu_boot_dpu_active_drive(struct 
ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_DPU_ACTIVE); - - if (enable) - val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val); - else - val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val); - - REGV_WR32(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, val); -} - -static int ivpu_boot_pwr_domain_enable(struct ivpu_device *vdev) -{ - int ret; - - ivpu_boot_pwr_island_trickle_drive(vdev, true); - ivpu_boot_pwr_island_drive(vdev, true); - - ret = ivpu_boot_wait_for_pwr_island_status(vdev, 0x1); - if (ret) { - ivpu_err(vdev, "Timed out waiting for power island status\n"); - return ret; - } - - ret = ivpu_boot_top_noc_qrenqn_check(vdev, 0x0); - if (ret) { - ivpu_err(vdev, "Failed qrenqn check %d\n", ret); - return ret; - } - - ivpu_boot_host_ss_clk_drive(vdev, true); - ivpu_boot_pwr_island_isolation_drive(vdev, false); - ivpu_boot_host_ss_rst_drive(vdev, true); - ivpu_boot_dpu_active_drive(vdev, true); - - return ret; -} - -static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES); - - val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, NOSNOOP_OVERRIDE_EN, val); - val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val); - - if (ivpu_is_force_snoop_enabled(vdev)) - val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val); - else - val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val); - - REGV_WR32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, val); -} - -static void ivpu_boot_tbu_mmu_enable(struct ivpu_device *vdev) -{ - u32 val = REGV_RD32(VPU_37XX_HOST_IF_TBU_MMUSSIDV); - - val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val); - val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val); - val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val); - val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val); - - REGV_WR32(VPU_37XX_HOST_IF_TBU_MMUSSIDV, val); -} - -static void ivpu_boot_soc_cpu_boot(struct ivpu_device *vdev) -{ - u32 val; - - val = REGV_RD32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC); - val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val); - - val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val); - REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); - - val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); - REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); - - val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); - REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); - - val = vdev->fw->entry_point >> 9; - REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val); - - val = REG_SET_FLD(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, DONE, val); - REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val); - - ivpu_dbg(vdev, PM, "Booting firmware, mode: %s\n", - vdev->fw->entry_point == vdev->fw->cold_boot_entry_point ? 
"cold boot" : "resume"); -} - -static int ivpu_boot_d0i3_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); - if (ret) { - ivpu_err(vdev, "Failed to sync before D0i3 transition: %d\n", ret); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL); - if (enable) - val = REG_SET_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, I3, val); - else - val = REG_CLR_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, I3, val); - REGB_WR32(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, val); - - ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); - if (ret) - ivpu_err(vdev, "Failed to sync after D0i3 transition: %d\n", ret); - - return ret; -} - -static int ivpu_hw_37xx_info_init(struct ivpu_device *vdev) -{ - struct ivpu_hw_info *hw = vdev->hw; - - hw->tile_fuse = TILE_FUSE_ENABLE_BOTH; - hw->sku = TILE_SKU_BOTH; - hw->config = WP_CONFIG_2_TILE_4_3_RATIO; - hw->sched_mode = ivpu_sched_mode; - - ivpu_pll_init_frequency_ratios(vdev); - - ivpu_hw_init_range(&hw->ranges.global, 0x80000000, SZ_512M); - ivpu_hw_init_range(&hw->ranges.user, 0xc0000000, 255 * SZ_1M); - ivpu_hw_init_range(&hw->ranges.shave, 0x180000000, SZ_2G); - ivpu_hw_init_range(&hw->ranges.dma, 0x200000000, SZ_8G); - - vdev->platform = IVPU_PLATFORM_SILICON; - ivpu_hw_wa_init(vdev); - ivpu_hw_timeouts_init(vdev); - - return 0; -} - -static int ivpu_hw_37xx_ip_reset(struct ivpu_device *vdev) -{ - int ret; - u32 val; - - if (IVPU_WA(punit_disabled)) - return 0; - - ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); - if (ret) { - ivpu_err(vdev, "Timed out waiting for TRIGGER bit\n"); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_IP_RESET); - val = REG_SET_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, val); - REGB_WR32(VPU_HW_BTRS_MTL_VPU_IP_RESET, val); - - ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); - if (ret) - ivpu_err(vdev, "Timed out waiting for RESET completion\n"); - - return ret; -} - -static int ivpu_hw_37xx_reset(struct ivpu_device *vdev) -{ - int ret = 0; - - if (ivpu_hw_37xx_ip_reset(vdev)) { - ivpu_err(vdev, "Failed to reset NPU\n"); - ret = -EIO; - } - - if (ivpu_pll_disable(vdev)) { - ivpu_err(vdev, "Failed to disable PLL\n"); - ret = -EIO; - } - - return ret; -} - -static int ivpu_hw_37xx_d0i3_enable(struct ivpu_device *vdev) -{ - int ret; - - ret = ivpu_boot_d0i3_drive(vdev, true); - if (ret) - ivpu_err(vdev, "Failed to enable D0i3: %d\n", ret); - - udelay(5); /* VPU requires 5 us to complete the transition */ - - return ret; -} - -static int ivpu_hw_37xx_d0i3_disable(struct ivpu_device *vdev) -{ - int ret; - - ret = ivpu_boot_d0i3_drive(vdev, false); - if (ret) - ivpu_err(vdev, "Failed to disable D0i3: %d\n", ret); - - return ret; -} - -static int ivpu_hw_37xx_power_up(struct ivpu_device *vdev) -{ - int ret; - - /* PLL requests may fail when powering down, so issue WP 0 here */ - ret = ivpu_pll_disable(vdev); - if (ret) - ivpu_warn(vdev, "Failed to disable PLL: %d\n", ret); - - ret = ivpu_hw_37xx_d0i3_disable(vdev); - if (ret) - ivpu_warn(vdev, "Failed to disable D0I3: %d\n", ret); - - ret = ivpu_pll_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable PLL: %d\n", ret); - return ret; - } - - ret = ivpu_boot_host_ss_configure(vdev); - if (ret) { - ivpu_err(vdev, "Failed to configure host SS: %d\n", ret); - return ret; - } - - /* - * The control circuitry for vpu_idle indication logic powers up active. 
- * To ensure unnecessary low power mode signal from LRT during bring up, - * KMD disables the circuitry prior to bringing up the Main Power island. - */ - ivpu_boot_vpu_idle_gen_disable(vdev); - - ret = ivpu_boot_pwr_domain_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable power domain: %d\n", ret); - return ret; - } - - ret = ivpu_boot_host_ss_axi_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable AXI: %d\n", ret); - return ret; - } - - ret = ivpu_boot_host_ss_top_noc_enable(vdev); - if (ret) - ivpu_err(vdev, "Failed to enable TOP NOC: %d\n", ret); - - return ret; -} - -static int ivpu_hw_37xx_boot_fw(struct ivpu_device *vdev) -{ - ivpu_boot_no_snoop_enable(vdev); - ivpu_boot_tbu_mmu_enable(vdev); - ivpu_boot_soc_cpu_boot(vdev); - - return 0; -} - -static bool ivpu_hw_37xx_is_idle(struct ivpu_device *vdev) -{ - u32 val; - - if (IVPU_WA(punit_disabled)) - return true; - - val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_STATUS); - return REG_TEST_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, READY, val) && - REG_TEST_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, IDLE, val); -} - -static int ivpu_hw_37xx_wait_for_idle(struct ivpu_device *vdev) -{ - return REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, IDLE, 0x1, IDLE_TIMEOUT_US); -} - -static void ivpu_hw_37xx_save_d0i3_entry_timestamp(struct ivpu_device *vdev) -{ - vdev->hw->d0i3_entry_host_ts = ktime_get_boottime(); - vdev->hw->d0i3_entry_vpu_ts = REGV_RD64(VPU_37XX_CPU_SS_TIM_PERF_FREE_CNT); -} - -static int ivpu_hw_37xx_power_down(struct ivpu_device *vdev) -{ - int ret = 0; - - ivpu_hw_37xx_save_d0i3_entry_timestamp(vdev); - - if (!ivpu_hw_37xx_is_idle(vdev)) - ivpu_warn(vdev, "NPU not idle during power down\n"); - - if (ivpu_hw_37xx_reset(vdev)) { - ivpu_err(vdev, "Failed to reset NPU\n"); - ret = -EIO; - } - - if (ivpu_hw_37xx_d0i3_enable(vdev)) { - ivpu_err(vdev, "Failed to enter D0I3\n"); - ret = -EIO; - } - - return ret; -} - -static void ivpu_hw_37xx_wdt_disable(struct ivpu_device *vdev) -{ - u32 val; - - /* Enable writing and set non-zero WDT value */ - REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); - REGV_WR32(VPU_37XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); - - /* Enable writing and disable watchdog timer */ - REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); - REGV_WR32(VPU_37XX_CPU_SS_TIM_WDOG_EN, 0); - - /* Now clear the timeout interrupt */ - val = REGV_RD32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG); - val = REG_CLR_FLD(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); - REGV_WR32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, val); -} - -static u32 ivpu_hw_37xx_profiling_freq_get(struct ivpu_device *vdev) -{ - return PLL_PROF_CLK_FREQ; -} - -static void ivpu_hw_37xx_profiling_freq_drive(struct ivpu_device *vdev, bool enable) -{ - /* Profiling freq - is a debug feature. Unavailable on VPU 37XX. 
*/ -} - -static u32 ivpu_hw_37xx_ratio_to_freq(struct ivpu_device *vdev, u32 ratio) -{ - u32 pll_clock = PLL_REF_CLK_FREQ * ratio; - u32 cpu_clock; - - if ((vdev->hw->config & 0xff) == PLL_RATIO_4_3) - cpu_clock = pll_clock * 2 / 4; - else - cpu_clock = pll_clock * 2 / 5; - - return cpu_clock; -} - -/* Register indirect accesses */ -static u32 ivpu_hw_37xx_reg_pll_freq_get(struct ivpu_device *vdev) -{ - u32 pll_curr_ratio; - - pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL); - pll_curr_ratio &= VPU_HW_BTRS_MTL_CURRENT_PLL_RATIO_MASK; - - if (!ivpu_is_silicon(vdev)) - return PLL_SIMULATION_FREQ; - - return ivpu_hw_37xx_ratio_to_freq(vdev, pll_curr_ratio); -} - -static u32 ivpu_hw_37xx_reg_telemetry_offset_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_OFFSET); -} - -static u32 ivpu_hw_37xx_reg_telemetry_size_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_SIZE); -} - -static u32 ivpu_hw_37xx_reg_telemetry_enable_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_ENABLE); -} - -static void ivpu_hw_37xx_reg_db_set(struct ivpu_device *vdev, u32 db_id) -{ - u32 reg_stride = VPU_37XX_CPU_SS_DOORBELL_1 - VPU_37XX_CPU_SS_DOORBELL_0; - u32 val = REG_FLD(VPU_37XX_CPU_SS_DOORBELL_0, SET); - - REGV_WR32I(VPU_37XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val); -} - -static u32 ivpu_hw_37xx_reg_ipc_rx_addr_get(struct ivpu_device *vdev) -{ - return REGV_RD32(VPU_37XX_HOST_SS_TIM_IPC_FIFO_ATM); -} - -static u32 ivpu_hw_37xx_reg_ipc_rx_count_get(struct ivpu_device *vdev) -{ - u32 count = REGV_RD32_SILENT(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT); - - return REG_GET_FLD(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count); -} - -static void ivpu_hw_37xx_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) -{ - REGV_WR32(VPU_37XX_CPU_SS_TIM_IPC_FIFO, vpu_addr); -} - -static void ivpu_hw_37xx_irq_clear(struct ivpu_device *vdev) -{ - REGV_WR64(VPU_37XX_HOST_SS_ICB_CLEAR_0, ICB_0_1_IRQ_MASK); -} - -static void ivpu_hw_37xx_irq_enable(struct ivpu_device *vdev) -{ - REGV_WR32(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, ITF_FIREWALL_VIOLATION_MASK); - REGV_WR64(VPU_37XX_HOST_SS_ICB_ENABLE_0, ICB_0_1_IRQ_MASK); - REGB_WR32(VPU_HW_BTRS_MTL_LOCAL_INT_MASK, BUTTRESS_IRQ_ENABLE_MASK); - REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x0); -} - -static void ivpu_hw_37xx_irq_disable(struct ivpu_device *vdev) -{ - REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x1); - REGB_WR32(VPU_HW_BTRS_MTL_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK); - REGV_WR64(VPU_37XX_HOST_SS_ICB_ENABLE_0, 0x0ull); - REGV_WR32(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, 0x0); -} - -static void ivpu_hw_37xx_irq_wdt_nce_handler(struct ivpu_device *vdev) -{ - ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ"); -} - -static void ivpu_hw_37xx_irq_wdt_mss_handler(struct ivpu_device *vdev) -{ - ivpu_hw_wdt_disable(vdev); - ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ"); -} - -static void ivpu_hw_37xx_irq_noc_firewall_handler(struct ivpu_device *vdev) -{ - ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ"); -} - -/* Handler for IRQs from VPU core (irqV) */ -static bool ivpu_hw_37xx_irqv_handler(struct ivpu_device *vdev, int irq, bool *wake_thread) -{ - u32 status = REGV_RD32(VPU_37XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK; - - if (!status) - return false; - - REGV_WR32(VPU_37XX_HOST_SS_ICB_CLEAR_0, status); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status)) - ivpu_mmu_irq_evtq_handler(vdev); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT, 
status)) - ivpu_ipc_irq_handler(vdev, wake_thread); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT, status)) - ivpu_dbg(vdev, IRQ, "MMU sync complete\n"); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT, status)) - ivpu_mmu_irq_gerr_handler(vdev); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status)) - ivpu_hw_37xx_irq_wdt_mss_handler(vdev); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, status)) - ivpu_hw_37xx_irq_wdt_nce_handler(vdev); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, status)) - ivpu_hw_37xx_irq_noc_firewall_handler(vdev); - - return true; -} - -/* Handler for IRQs from Buttress core (irqB) */ -static bool ivpu_hw_37xx_irqb_handler(struct ivpu_device *vdev, int irq) -{ - u32 status = REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK; - bool schedule_recovery = false; - - if (!status) - return false; - - if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, FREQ_CHANGE, status)) - ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq: %08x", - REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL)); - - if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR, status)) { - ivpu_err(vdev, "ATS_ERR irq 0x%016llx", REGB_RD64(VPU_HW_BTRS_MTL_ATS_ERR_LOG_0)); - REGB_WR32(VPU_HW_BTRS_MTL_ATS_ERR_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR, status)) { - u32 ufi_log = REGB_RD32(VPU_HW_BTRS_MTL_UFI_ERR_LOG); - - ivpu_err(vdev, "UFI_ERR irq (0x%08x) opcode: 0x%02lx axi_id: 0x%02lx cq_id: 0x%03lx", - ufi_log, REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, OPCODE, ufi_log), - REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, AXI_ID, ufi_log), - REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, CQ_ID, ufi_log)); - REGB_WR32(VPU_HW_BTRS_MTL_UFI_ERR_CLEAR, 0x1); - schedule_recovery = true; - } - - /* This must be done after interrupts are cleared at the source. */ - if (IVPU_WA(interrupt_clear_with_0)) - /* - * Writing 1 triggers an interrupt, so we can't perform read update write. - * Clear local interrupt status by writing 0 to all bits. 
- */ - REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, 0x0); - else - REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, status); - - if (schedule_recovery) - ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); - - return true; -} - -static irqreturn_t ivpu_hw_37xx_irq_handler(int irq, void *ptr) -{ - struct ivpu_device *vdev = ptr; - bool irqv_handled, irqb_handled, wake_thread = false; - - REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x1); - - irqv_handled = ivpu_hw_37xx_irqv_handler(vdev, irq, &wake_thread); - irqb_handled = ivpu_hw_37xx_irqb_handler(vdev, irq); - - /* Re-enable global interrupts to re-trigger MSI for pending interrupts */ - REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x0); - - if (wake_thread) - return IRQ_WAKE_THREAD; - if (irqv_handled || irqb_handled) - return IRQ_HANDLED; - return IRQ_NONE; -} - -static void ivpu_hw_37xx_diagnose_failure(struct ivpu_device *vdev) -{ - u32 irqv = REGV_RD32(VPU_37XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK; - u32 irqb = REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK; - - if (ivpu_hw_37xx_reg_ipc_rx_count_get(vdev)) - ivpu_err(vdev, "IPC FIFO queue not empty, missed IPC IRQ"); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, irqv)) - ivpu_err(vdev, "WDT MSS timeout detected\n"); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, irqv)) - ivpu_err(vdev, "WDT NCE timeout detected\n"); - - if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, irqv)) - ivpu_err(vdev, "NOC Firewall irq detected\n"); - - if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR, irqb)) - ivpu_err(vdev, "ATS_ERR irq 0x%016llx", REGB_RD64(VPU_HW_BTRS_MTL_ATS_ERR_LOG_0)); - - if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR, irqb)) { - u32 ufi_log = REGB_RD32(VPU_HW_BTRS_MTL_UFI_ERR_LOG); - - ivpu_err(vdev, "UFI_ERR irq (0x%08x) opcode: 0x%02lx axi_id: 0x%02lx cq_id: 0x%03lx", - ufi_log, REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, OPCODE, ufi_log), - REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, AXI_ID, ufi_log), - REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, CQ_ID, ufi_log)); - } -} - -const struct ivpu_hw_ops ivpu_hw_37xx_ops = { - .info_init = ivpu_hw_37xx_info_init, - .power_up = ivpu_hw_37xx_power_up, - .is_idle = ivpu_hw_37xx_is_idle, - .wait_for_idle = ivpu_hw_37xx_wait_for_idle, - .power_down = ivpu_hw_37xx_power_down, - .reset = ivpu_hw_37xx_reset, - .boot_fw = ivpu_hw_37xx_boot_fw, - .wdt_disable = ivpu_hw_37xx_wdt_disable, - .diagnose_failure = ivpu_hw_37xx_diagnose_failure, - .profiling_freq_get = ivpu_hw_37xx_profiling_freq_get, - .profiling_freq_drive = ivpu_hw_37xx_profiling_freq_drive, - .reg_pll_freq_get = ivpu_hw_37xx_reg_pll_freq_get, - .ratio_to_freq = ivpu_hw_37xx_ratio_to_freq, - .reg_telemetry_offset_get = ivpu_hw_37xx_reg_telemetry_offset_get, - .reg_telemetry_size_get = ivpu_hw_37xx_reg_telemetry_size_get, - .reg_telemetry_enable_get = ivpu_hw_37xx_reg_telemetry_enable_get, - .reg_db_set = ivpu_hw_37xx_reg_db_set, - .reg_ipc_rx_addr_get = ivpu_hw_37xx_reg_ipc_rx_addr_get, - .reg_ipc_rx_count_get = ivpu_hw_37xx_reg_ipc_rx_count_get, - .reg_ipc_tx_set = ivpu_hw_37xx_reg_ipc_tx_set, - .irq_clear = ivpu_hw_37xx_irq_clear, - .irq_enable = ivpu_hw_37xx_irq_enable, - .irq_disable = ivpu_hw_37xx_irq_disable, - .irq_handler = ivpu_hw_37xx_irq_handler, -}; diff --git a/drivers/accel/ivpu/ivpu_hw_40xx.c b/drivers/accel/ivpu/ivpu_hw_40xx.c deleted file mode 100644 index 9c6fae00da2a..000000000000 --- a/drivers/accel/ivpu/ivpu_hw_40xx.c +++ /dev/null @@ -1,1256 +0,0 @@ -// SPDX-License-Identifier: 
GPL-2.0-only -/* - * Copyright (C) 2020-2024 Intel Corporation - */ - -#include "ivpu_drv.h" -#include "ivpu_fw.h" -#include "ivpu_hw.h" -#include "ivpu_hw_btrs_lnl_reg.h" -#include "ivpu_hw_40xx_reg.h" -#include "ivpu_hw_reg_io.h" -#include "ivpu_ipc.h" -#include "ivpu_mmu.h" -#include "ivpu_pm.h" - -#include <linux/dmi.h> - -#define TILE_MAX_NUM 6 -#define TILE_MAX_MASK 0x3f - -#define LNL_HW_ID 0x4040 - -#define SKU_TILE_SHIFT 0u -#define SKU_TILE_MASK 0x0000ffffu -#define SKU_HW_ID_SHIFT 16u -#define SKU_HW_ID_MASK 0xffff0000u - -#define PLL_CONFIG_DEFAULT 0x0 -#define PLL_CDYN_DEFAULT 0x80 -#define PLL_EPP_DEFAULT 0x80 -#define PLL_REF_CLK_FREQ (50 * 1000000) -#define PLL_RATIO_TO_FREQ(x) ((x) * PLL_REF_CLK_FREQ) - -#define PLL_PROFILING_FREQ_DEFAULT 38400000 -#define PLL_PROFILING_FREQ_HIGH 400000000 - -#define TIM_SAFE_ENABLE 0xf1d0dead -#define TIM_WATCHDOG_RESET_VALUE 0xffffffff - -#define TIMEOUT_US (150 * USEC_PER_MSEC) -#define PWR_ISLAND_STATUS_TIMEOUT_US (5 * USEC_PER_MSEC) -#define PLL_TIMEOUT_US (1500 * USEC_PER_MSEC) -#define IDLE_TIMEOUT_US (5 * USEC_PER_MSEC) - -#define WEIGHTS_DEFAULT 0xf711f711u -#define WEIGHTS_ATS_DEFAULT 0x0000f711u - -#define ICB_0_IRQ_MASK ((REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT))) - -#define ICB_1_IRQ_MASK ((REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_2_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_3_INT)) | \ - (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_4_INT))) - -#define ICB_0_1_IRQ_MASK ((((u64)ICB_1_IRQ_MASK) << 32) | ICB_0_IRQ_MASK) - -#define BUTTRESS_IRQ_MASK ((REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR)) | \ - (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR))) - -#define BUTTRESS_IRQ_ENABLE_MASK ((u32)~BUTTRESS_IRQ_MASK) -#define BUTTRESS_IRQ_DISABLE_MASK ((u32)-1) - -#define ITF_FIREWALL_VIOLATION_MASK ((REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_ROM_CMX)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_DBG)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_CTRL)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, DEC400)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_NCE)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI)) | \ - (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI_CMX))) - -static char *ivpu_platform_to_str(u32 platform) -{ - switch (platform) { - case IVPU_PLATFORM_SILICON: - return "SILICON"; - case IVPU_PLATFORM_SIMICS: - return "SIMICS"; - case IVPU_PLATFORM_FPGA: - return "FPGA"; - default: - return "Invalid platform"; - } -} - -static const struct dmi_system_id ivpu_dmi_platform_simulation[] = { - { - .ident = "Intel Simics", - .matches = { - DMI_MATCH(DMI_BOARD_NAME, "lnlrvp"), - DMI_MATCH(DMI_BOARD_VERSION, "1.0"), - DMI_MATCH(DMI_BOARD_SERIAL, "123456789"), - }, - }, - { - .ident = "Intel Simics", - .matches = { - DMI_MATCH(DMI_BOARD_NAME, "Simics"), - }, - }, - { } -}; - -static void ivpu_hw_read_platform(struct ivpu_device
*vdev) -{ - if (dmi_check_system(ivpu_dmi_platform_simulation)) - vdev->platform = IVPU_PLATFORM_SIMICS; - else - vdev->platform = IVPU_PLATFORM_SILICON; - - ivpu_dbg(vdev, MISC, "Platform type: %s (%d)\n", - ivpu_platform_to_str(vdev->platform), vdev->platform); -} - -static void ivpu_hw_wa_init(struct ivpu_device *vdev) -{ - vdev->wa.punit_disabled = ivpu_is_fpga(vdev); - vdev->wa.clear_runtime_mem = false; - - if (ivpu_hw_gen(vdev) == IVPU_HW_40XX) - vdev->wa.disable_clock_relinquish = true; - - IVPU_PRINT_WA(punit_disabled); - IVPU_PRINT_WA(clear_runtime_mem); - IVPU_PRINT_WA(disable_clock_relinquish); -} - -static void ivpu_hw_timeouts_init(struct ivpu_device *vdev) -{ - if (ivpu_is_fpga(vdev)) { - vdev->timeout.boot = 100000; - vdev->timeout.jsm = 50000; - vdev->timeout.tdr = 2000000; - vdev->timeout.reschedule_suspend = 1000; - vdev->timeout.autosuspend = -1; - vdev->timeout.d0i3_entry_msg = 500; - } else if (ivpu_is_simics(vdev)) { - vdev->timeout.boot = 50; - vdev->timeout.jsm = 500; - vdev->timeout.tdr = 10000; - vdev->timeout.reschedule_suspend = 10; - vdev->timeout.autosuspend = -1; - vdev->timeout.d0i3_entry_msg = 100; - } else { - vdev->timeout.boot = 1000; - vdev->timeout.jsm = 500; - vdev->timeout.tdr = 2000; - vdev->timeout.reschedule_suspend = 10; - vdev->timeout.autosuspend = 10; - vdev->timeout.d0i3_entry_msg = 5; - } -} - -static int ivpu_pll_wait_for_cmd_send(struct ivpu_device *vdev) -{ - return REGB_POLL_FLD(VPU_HW_BTRS_LNL_WP_REQ_CMD, SEND, 0, PLL_TIMEOUT_US); -} - -static int ivpu_pll_cmd_send(struct ivpu_device *vdev, u16 min_ratio, u16 max_ratio, - u16 target_ratio, u16 epp, u16 config, u16 cdyn) -{ - int ret; - u32 val; - - ret = ivpu_pll_wait_for_cmd_send(vdev); - if (ret) { - ivpu_err(vdev, "Failed to sync before WP request: %d\n", ret); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, MIN_RATIO, min_ratio, val); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, MAX_RATIO, max_ratio, val); - REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, val); - - val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, TARGET_RATIO, target_ratio, val); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, EPP, epp, val); - REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, val); - - val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, CONFIG, config, val); - val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, CDYN, cdyn, val); - REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, val); - - val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_CMD); - val = REG_SET_FLD(VPU_HW_BTRS_LNL_WP_REQ_CMD, SEND, val); - REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_CMD, val); - - ret = ivpu_pll_wait_for_cmd_send(vdev); - if (ret) - ivpu_err(vdev, "Failed to sync after WP request: %d\n", ret); - - return ret; -} - -static int ivpu_pll_wait_for_status_ready(struct ivpu_device *vdev) -{ - return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, READY, 1, PLL_TIMEOUT_US); -} - -static int ivpu_wait_for_clock_own_resource_ack(struct ivpu_device *vdev) -{ - if (ivpu_is_simics(vdev)) - return 0; - - return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, CLOCK_RESOURCE_OWN_ACK, 1, TIMEOUT_US); -} - -static void ivpu_pll_init_frequency_ratios(struct ivpu_device *vdev) -{ - struct ivpu_hw_info *hw = vdev->hw; - u8 fuse_min_ratio, fuse_pn_ratio, fuse_max_ratio; - u32 fmin_fuse, fmax_fuse; - - fmin_fuse = REGB_RD32(VPU_HW_BTRS_LNL_FMIN_FUSE); - fuse_min_ratio = 
REG_GET_FLD(VPU_HW_BTRS_LNL_FMIN_FUSE, MIN_RATIO, fmin_fuse); - fuse_pn_ratio = REG_GET_FLD(VPU_HW_BTRS_LNL_FMIN_FUSE, PN_RATIO, fmin_fuse); - - fmax_fuse = REGB_RD32(VPU_HW_BTRS_LNL_FMAX_FUSE); - fuse_max_ratio = REG_GET_FLD(VPU_HW_BTRS_LNL_FMAX_FUSE, MAX_RATIO, fmax_fuse); - - hw->pll.min_ratio = clamp_t(u8, ivpu_pll_min_ratio, fuse_min_ratio, fuse_max_ratio); - hw->pll.max_ratio = clamp_t(u8, ivpu_pll_max_ratio, hw->pll.min_ratio, fuse_max_ratio); - hw->pll.pn_ratio = clamp_t(u8, fuse_pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); -} - -static int ivpu_pll_drive(struct ivpu_device *vdev, bool enable) -{ - u16 config = enable ? PLL_CONFIG_DEFAULT : 0; - u16 cdyn = enable ? PLL_CDYN_DEFAULT : 0; - u16 epp = enable ? PLL_EPP_DEFAULT : 0; - struct ivpu_hw_info *hw = vdev->hw; - u16 target_ratio = hw->pll.pn_ratio; - int ret; - - ivpu_dbg(vdev, PM, "PLL workpoint request: %u Hz, epp: 0x%x, config: 0x%x, cdyn: 0x%x\n", - PLL_RATIO_TO_FREQ(target_ratio), epp, config, cdyn); - - ret = ivpu_pll_cmd_send(vdev, hw->pll.min_ratio, hw->pll.max_ratio, - target_ratio, epp, config, cdyn); - if (ret) { - ivpu_err(vdev, "Failed to send PLL workpoint request: %d\n", ret); - return ret; - } - - if (enable) { - ret = ivpu_pll_wait_for_status_ready(vdev); - if (ret) { - ivpu_err(vdev, "Timed out waiting for PLL ready status\n"); - return ret; - } - } - - return 0; -} - -static int ivpu_pll_enable(struct ivpu_device *vdev) -{ - return ivpu_pll_drive(vdev, true); -} - -static int ivpu_pll_disable(struct ivpu_device *vdev) -{ - return ivpu_pll_drive(vdev, false); -} - -static void ivpu_boot_host_ss_rst_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_CPR_RST_EN); - - if (enable) { - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, TOP_NOC, val); - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, DSS_MAS, val); - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, CSS_MAS, val); - } else { - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, TOP_NOC, val); - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, DSS_MAS, val); - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, CSS_MAS, val); - } - - REGV_WR32(VPU_40XX_HOST_SS_CPR_RST_EN, val); -} - -static void ivpu_boot_host_ss_clk_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_CPR_CLK_EN); - - if (enable) { - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, TOP_NOC, val); - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, DSS_MAS, val); - val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, CSS_MAS, val); - } else { - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, TOP_NOC, val); - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, DSS_MAS, val); - val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, CSS_MAS, val); - } - - REGV_WR32(VPU_40XX_HOST_SS_CPR_CLK_EN, val); -} - -static int ivpu_boot_noc_qreqn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QREQN); - - if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QACCEPTN); - - if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QACCEPTN, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QDENY); - - if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QDENY, TOP_SOCMMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int 
ivpu_boot_top_noc_qrenqn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QREQN); - - if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QACCEPTN); - - if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_top_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QDENY); - - if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || - !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) - return -EIO; - - return 0; -} - -static void ivpu_boot_idle_gen_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_IDLE_GEN); - - if (enable) - val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_IDLE_GEN, EN, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_IDLE_GEN, EN, val); - - REGV_WR32(VPU_40XX_HOST_SS_AON_IDLE_GEN, val); -} - -static int ivpu_boot_host_ss_check(struct ivpu_device *vdev) -{ - int ret; - - ret = ivpu_boot_noc_qreqn_check(vdev, 0x0); - if (ret) { - ivpu_err(vdev, "Failed qreqn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_noc_qacceptn_check(vdev, 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_noc_qdeny_check(vdev, 0x0); - if (ret) - ivpu_err(vdev, "Failed qdeny check %d\n", ret); - - return ret; -} - -static int ivpu_boot_host_ss_axi_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QREQN); - if (enable) - val = REG_SET_FLD(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); - REGV_WR32(VPU_40XX_HOST_SS_NOC_QREQN, val); - - ret = ivpu_boot_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_noc_qdeny_check(vdev, 0x0); - if (ret) { - ivpu_err(vdev, "Failed qdeny check: %d\n", ret); - return ret; - } - - if (enable) { - REGB_WR32(VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS, WEIGHTS_DEFAULT); - REGB_WR32(VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS_ATS, WEIGHTS_ATS_DEFAULT); - } - - return ret; -} - -static int ivpu_boot_host_ss_axi_enable(struct ivpu_device *vdev) -{ - return ivpu_boot_host_ss_axi_drive(vdev, true); -} - -static int ivpu_boot_host_ss_top_noc_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - val = REGV_RD32(VPU_40XX_TOP_NOC_QREQN); - if (enable) { - val = REG_SET_FLD(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, val); - val = REG_SET_FLD(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); - } else { - val = REG_CLR_FLD(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, val); - val = REG_CLR_FLD(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); - } - REGV_WR32(VPU_40XX_TOP_NOC_QREQN, val); - - ret = ivpu_boot_top_noc_qacceptn_check(vdev, enable ? 
0x1 : 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_top_noc_qdeny_check(vdev, 0x0); - if (ret) - ivpu_err(vdev, "Failed qdeny check: %d\n", ret); - - return ret; -} - -static int ivpu_boot_host_ss_top_noc_enable(struct ivpu_device *vdev) -{ - return ivpu_boot_host_ss_top_noc_drive(vdev, true); -} - -static void ivpu_boot_pwr_island_trickle_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0); - - if (enable) - val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, CSS_CPU, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, CSS_CPU, val); - - REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, val); - - if (enable) - ndelay(500); -} - -static void ivpu_boot_pwr_island_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0); - - if (enable) - val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, CSS_CPU, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, CSS_CPU, val); - - REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, val); - - if (!enable) - ndelay(500); -} - -static int ivpu_boot_wait_for_pwr_island_status(struct ivpu_device *vdev, u32 exp_val) -{ - if (ivpu_is_fpga(vdev)) - return 0; - - return REGV_POLL_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_STATUS0, CSS_CPU, - exp_val, PWR_ISLAND_STATUS_TIMEOUT_US); -} - -static void ivpu_boot_pwr_island_isolation_drive(struct ivpu_device *vdev, bool enable) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0); - - if (enable) - val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, CSS_CPU, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, CSS_CPU, val); - - REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, val); -} - -static void ivpu_boot_no_snoop_enable(struct ivpu_device *vdev) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES); - - val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, SNOOP_OVERRIDE_EN, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AW_SNOOP_OVERRIDE, val); - - if (ivpu_is_force_snoop_enabled(vdev)) - val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val); - else - val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val); - - REGV_WR32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, val); -} - -static void ivpu_boot_tbu_mmu_enable(struct ivpu_device *vdev) -{ - u32 val = REGV_RD32(VPU_40XX_HOST_IF_TBU_MMUSSIDV); - - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU1_AWMMUSSIDV, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU1_ARMMUSSIDV, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val); - val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val); - - REGV_WR32(VPU_40XX_HOST_IF_TBU_MMUSSIDV, val); -} - -static int ivpu_boot_cpu_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QACCEPTN); - - if (!REG_TEST_FLD_NUM(VPU_40XX_CPU_SS_CPR_NOC_QACCEPTN, TOP_MMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int ivpu_boot_cpu_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) -{ - u32 val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QDENY); - - if (!REG_TEST_FLD_NUM(VPU_40XX_CPU_SS_CPR_NOC_QDENY, TOP_MMIO, exp_val, val)) - return -EIO; - - return 0; -} - -static int 
ivpu_boot_pwr_domain_enable(struct ivpu_device *vdev) -{ - int ret; - - ret = ivpu_wait_for_clock_own_resource_ack(vdev); - if (ret) { - ivpu_err(vdev, "Timed out waiting for clock own resource ACK\n"); - return ret; - } - - ivpu_boot_pwr_island_trickle_drive(vdev, true); - ivpu_boot_pwr_island_drive(vdev, true); - - ret = ivpu_boot_wait_for_pwr_island_status(vdev, 0x1); - if (ret) { - ivpu_err(vdev, "Timed out waiting for power island status\n"); - return ret; - } - - ret = ivpu_boot_top_noc_qrenqn_check(vdev, 0x0); - if (ret) { - ivpu_err(vdev, "Failed qrenqn check %d\n", ret); - return ret; - } - - ivpu_boot_host_ss_clk_drive(vdev, true); - ivpu_boot_host_ss_rst_drive(vdev, true); - ivpu_boot_pwr_island_isolation_drive(vdev, false); - - return ret; -} - -static int ivpu_boot_soc_cpu_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QREQN); - if (enable) - val = REG_SET_FLD(VPU_40XX_CPU_SS_CPR_NOC_QREQN, TOP_MMIO, val); - else - val = REG_CLR_FLD(VPU_40XX_CPU_SS_CPR_NOC_QREQN, TOP_MMIO, val); - REGV_WR32(VPU_40XX_CPU_SS_CPR_NOC_QREQN, val); - - ret = ivpu_boot_cpu_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); - if (ret) { - ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); - return ret; - } - - ret = ivpu_boot_cpu_noc_qdeny_check(vdev, 0x0); - if (ret) - ivpu_err(vdev, "Failed qdeny check: %d\n", ret); - - return ret; -} - -static int ivpu_boot_soc_cpu_enable(struct ivpu_device *vdev) -{ - return ivpu_boot_soc_cpu_drive(vdev, true); -} - -static int ivpu_boot_soc_cpu_boot(struct ivpu_device *vdev) -{ - int ret; - u32 val; - u64 val64; - - ret = ivpu_boot_soc_cpu_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable SOC CPU: %d\n", ret); - return ret; - } - - val64 = vdev->fw->entry_point; - val64 <<= ffs(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO_IMAGE_LOCATION_MASK) - 1; - REGV_WR64(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, val64); - - val = REGV_RD32(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO); - val = REG_SET_FLD(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, DONE, val); - REGV_WR32(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, val); - - ivpu_dbg(vdev, PM, "Booting firmware, mode: %s\n", - ivpu_fw_is_cold_boot(vdev) ? 
"cold boot" : "resume"); - - return 0; -} - -static int ivpu_boot_d0i3_drive(struct ivpu_device *vdev, bool enable) -{ - int ret; - u32 val; - - ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); - if (ret) { - ivpu_err(vdev, "Failed to sync before D0i3 transition: %d\n", ret); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_LNL_D0I3_CONTROL); - if (enable) - val = REG_SET_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, I3, val); - else - val = REG_CLR_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, I3, val); - REGB_WR32(VPU_HW_BTRS_LNL_D0I3_CONTROL, val); - - ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); - if (ret) { - ivpu_err(vdev, "Failed to sync after D0i3 transition: %d\n", ret); - return ret; - } - - return 0; -} - -static bool ivpu_tile_disable_check(u32 config) -{ - /* Allowed values: 0 or one bit from range 0-5 (6 tiles) */ - if (config == 0) - return true; - - if (config > BIT(TILE_MAX_NUM - 1)) - return false; - - if ((config & (config - 1)) == 0) - return true; - - return false; -} - -static int ivpu_hw_40xx_info_init(struct ivpu_device *vdev) -{ - struct ivpu_hw_info *hw = vdev->hw; - u32 tile_disable; - u32 fuse; - - fuse = REGB_RD32(VPU_HW_BTRS_LNL_TILE_FUSE); - if (!REG_TEST_FLD(VPU_HW_BTRS_LNL_TILE_FUSE, VALID, fuse)) { - ivpu_err(vdev, "Fuse: invalid (0x%x)\n", fuse); - return -EIO; - } - - tile_disable = REG_GET_FLD(VPU_HW_BTRS_LNL_TILE_FUSE, CONFIG, fuse); - if (!ivpu_tile_disable_check(tile_disable)) { - ivpu_err(vdev, "Fuse: Invalid tile disable config (0x%x)\n", tile_disable); - return -EIO; - } - - if (tile_disable) - ivpu_dbg(vdev, MISC, "Fuse: %d tiles enabled. Tile number %d disabled\n", - TILE_MAX_NUM - 1, ffs(tile_disable) - 1); - else - ivpu_dbg(vdev, MISC, "Fuse: All %d tiles enabled\n", TILE_MAX_NUM); - - hw->sched_mode = ivpu_sched_mode; - hw->tile_fuse = tile_disable; - hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT; - - ivpu_pll_init_frequency_ratios(vdev); - - ivpu_hw_init_range(&vdev->hw->ranges.global, 0x80000000, SZ_512M); - ivpu_hw_init_range(&vdev->hw->ranges.user, 0x80000000, SZ_256M); - ivpu_hw_init_range(&vdev->hw->ranges.shave, 0x80000000 + SZ_256M, SZ_2G - SZ_256M); - ivpu_hw_init_range(&vdev->hw->ranges.dma, 0x200000000, SZ_8G); - - ivpu_hw_read_platform(vdev); - ivpu_hw_wa_init(vdev); - ivpu_hw_timeouts_init(vdev); - - return 0; -} - -static int ivpu_hw_40xx_ip_reset(struct ivpu_device *vdev) -{ - int ret; - u32 val; - - ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, 0, TIMEOUT_US); - if (ret) { - ivpu_err(vdev, "Wait for *_TRIGGER timed out\n"); - return ret; - } - - val = REGB_RD32(VPU_HW_BTRS_LNL_IP_RESET); - val = REG_SET_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, val); - REGB_WR32(VPU_HW_BTRS_LNL_IP_RESET, val); - - ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, 0, TIMEOUT_US); - if (ret) - ivpu_err(vdev, "Timed out waiting for RESET completion\n"); - - return ret; -} - -static int ivpu_hw_40xx_reset(struct ivpu_device *vdev) -{ - int ret = 0; - - if (ivpu_hw_40xx_ip_reset(vdev)) { - ivpu_err(vdev, "Failed to reset NPU IP\n"); - ret = -EIO; - } - - if (ivpu_pll_disable(vdev)) { - ivpu_err(vdev, "Failed to disable PLL\n"); - ret = -EIO; - } - - return ret; -} - -static int ivpu_hw_40xx_d0i3_enable(struct ivpu_device *vdev) -{ - int ret; - - if (IVPU_WA(punit_disabled)) - return 0; - - ret = ivpu_boot_d0i3_drive(vdev, true); - if (ret) - ivpu_err(vdev, "Failed to enable D0i3: %d\n", ret); - - udelay(5); /* VPU requires 5 us to complete the transition */ - - return ret; -} - -static int 
ivpu_hw_40xx_d0i3_disable(struct ivpu_device *vdev) -{ - int ret; - - if (IVPU_WA(punit_disabled)) - return 0; - - ret = ivpu_boot_d0i3_drive(vdev, false); - if (ret) - ivpu_err(vdev, "Failed to disable D0i3: %d\n", ret); - - return ret; -} - -static void ivpu_hw_40xx_profiling_freq_reg_set(struct ivpu_device *vdev) -{ - u32 val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); - - if (vdev->hw->pll.profiling_freq == PLL_PROFILING_FREQ_DEFAULT) - val = REG_CLR_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, PERF_CLK, val); - else - val = REG_SET_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, PERF_CLK, val); - - REGB_WR32(VPU_HW_BTRS_LNL_VPU_STATUS, val); -} - -static void ivpu_hw_40xx_ats_print(struct ivpu_device *vdev) -{ - ivpu_dbg(vdev, MISC, "Buttress ATS: %s\n", - REGB_RD32(VPU_HW_BTRS_LNL_HM_ATS) ? "Enable" : "Disable"); -} - -static void ivpu_hw_40xx_clock_relinquish_disable(struct ivpu_device *vdev) -{ - u32 val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); - - val = REG_SET_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, DISABLE_CLK_RELINQUISH, val); - REGB_WR32(VPU_HW_BTRS_LNL_VPU_STATUS, val); -} - -static int ivpu_hw_40xx_power_up(struct ivpu_device *vdev) -{ - int ret; - - ret = ivpu_hw_40xx_d0i3_disable(vdev); - if (ret) - ivpu_warn(vdev, "Failed to disable D0I3: %d\n", ret); - - ret = ivpu_pll_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable PLL: %d\n", ret); - return ret; - } - - if (IVPU_WA(disable_clock_relinquish)) - ivpu_hw_40xx_clock_relinquish_disable(vdev); - ivpu_hw_40xx_profiling_freq_reg_set(vdev); - ivpu_hw_40xx_ats_print(vdev); - - ret = ivpu_boot_host_ss_check(vdev); - if (ret) { - ivpu_err(vdev, "Failed to configure host SS: %d\n", ret); - return ret; - } - - ivpu_boot_idle_gen_drive(vdev, false); - - ret = ivpu_boot_pwr_domain_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable power domain: %d\n", ret); - return ret; - } - - ret = ivpu_boot_host_ss_axi_enable(vdev); - if (ret) { - ivpu_err(vdev, "Failed to enable AXI: %d\n", ret); - return ret; - } - - ret = ivpu_boot_host_ss_top_noc_enable(vdev); - if (ret) - ivpu_err(vdev, "Failed to enable TOP NOC: %d\n", ret); - - return ret; -} - -static int ivpu_hw_40xx_boot_fw(struct ivpu_device *vdev) -{ - int ret; - - ivpu_boot_no_snoop_enable(vdev); - ivpu_boot_tbu_mmu_enable(vdev); - - ret = ivpu_boot_soc_cpu_boot(vdev); - if (ret) - ivpu_err(vdev, "Failed to boot SOC CPU: %d\n", ret); - - return ret; -} - -static bool ivpu_hw_40xx_is_idle(struct ivpu_device *vdev) -{ - u32 val; - - if (IVPU_WA(punit_disabled)) - return true; - - val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); - return REG_TEST_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, READY, val) && - REG_TEST_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, IDLE, val); -} - -static int ivpu_hw_40xx_wait_for_idle(struct ivpu_device *vdev) -{ - return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, IDLE, 0x1, IDLE_TIMEOUT_US); -} - -static void ivpu_hw_40xx_save_d0i3_entry_timestamp(struct ivpu_device *vdev) -{ - vdev->hw->d0i3_entry_host_ts = ktime_get_boottime(); - vdev->hw->d0i3_entry_vpu_ts = REGV_RD64(VPU_40XX_CPU_SS_TIM_PERF_EXT_FREE_CNT); -} - -static int ivpu_hw_40xx_power_down(struct ivpu_device *vdev) -{ - int ret = 0; - - ivpu_hw_40xx_save_d0i3_entry_timestamp(vdev); - - if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_ip_reset(vdev)) - ivpu_warn(vdev, "Failed to reset the NPU\n"); - - if (ivpu_pll_disable(vdev)) { - ivpu_err(vdev, "Failed to disable PLL\n"); - ret = -EIO; - } - - if (ivpu_hw_40xx_d0i3_enable(vdev)) { - ivpu_err(vdev, "Failed to enter D0I3\n"); - ret = -EIO; - } - - return ret; -} - -static void 
ivpu_hw_40xx_wdt_disable(struct ivpu_device *vdev) -{ - u32 val; - - REGV_WR32(VPU_40XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); - REGV_WR32(VPU_40XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); - - REGV_WR32(VPU_40XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); - REGV_WR32(VPU_40XX_CPU_SS_TIM_WDOG_EN, 0); - - val = REGV_RD32(VPU_40XX_CPU_SS_TIM_GEN_CONFIG); - val = REG_CLR_FLD(VPU_40XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); - REGV_WR32(VPU_40XX_CPU_SS_TIM_GEN_CONFIG, val); -} - -static u32 ivpu_hw_40xx_profiling_freq_get(struct ivpu_device *vdev) -{ - return vdev->hw->pll.profiling_freq; -} - -static void ivpu_hw_40xx_profiling_freq_drive(struct ivpu_device *vdev, bool enable) -{ - if (enable) - vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_HIGH; - else - vdev->hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT; -} - -/* Register indirect accesses */ -static u32 ivpu_hw_40xx_reg_pll_freq_get(struct ivpu_device *vdev) -{ - u32 pll_curr_ratio; - - pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ); - pll_curr_ratio &= VPU_HW_BTRS_LNL_PLL_FREQ_RATIO_MASK; - - return PLL_RATIO_TO_FREQ(pll_curr_ratio); -} - -static u32 ivpu_hw_40xx_ratio_to_freq(struct ivpu_device *vdev, u32 ratio) -{ - return PLL_RATIO_TO_FREQ(ratio); -} - -static u32 ivpu_hw_40xx_reg_telemetry_offset_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_OFFSET); -} - -static u32 ivpu_hw_40xx_reg_telemetry_size_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_SIZE); -} - -static u32 ivpu_hw_40xx_reg_telemetry_enable_get(struct ivpu_device *vdev) -{ - return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_ENABLE); -} - -static void ivpu_hw_40xx_reg_db_set(struct ivpu_device *vdev, u32 db_id) -{ - u32 reg_stride = VPU_40XX_CPU_SS_DOORBELL_1 - VPU_40XX_CPU_SS_DOORBELL_0; - u32 val = REG_FLD(VPU_40XX_CPU_SS_DOORBELL_0, SET); - - REGV_WR32I(VPU_40XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val); -} - -static u32 ivpu_hw_40xx_reg_ipc_rx_addr_get(struct ivpu_device *vdev) -{ - return REGV_RD32(VPU_40XX_HOST_SS_TIM_IPC_FIFO_ATM); -} - -static u32 ivpu_hw_40xx_reg_ipc_rx_count_get(struct ivpu_device *vdev) -{ - u32 count = REGV_RD32_SILENT(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT); - - return REG_GET_FLD(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count); -} - -static void ivpu_hw_40xx_reg_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) -{ - REGV_WR32(VPU_40XX_CPU_SS_TIM_IPC_FIFO, vpu_addr); -} - -static void ivpu_hw_40xx_irq_clear(struct ivpu_device *vdev) -{ - REGV_WR64(VPU_40XX_HOST_SS_ICB_CLEAR_0, ICB_0_1_IRQ_MASK); -} - -static void ivpu_hw_40xx_irq_enable(struct ivpu_device *vdev) -{ - REGV_WR32(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, ITF_FIREWALL_VIOLATION_MASK); - REGV_WR64(VPU_40XX_HOST_SS_ICB_ENABLE_0, ICB_0_1_IRQ_MASK); - REGB_WR32(VPU_HW_BTRS_LNL_LOCAL_INT_MASK, BUTTRESS_IRQ_ENABLE_MASK); - REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x0); -} - -static void ivpu_hw_40xx_irq_disable(struct ivpu_device *vdev) -{ - REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x1); - REGB_WR32(VPU_HW_BTRS_LNL_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK); - REGV_WR64(VPU_40XX_HOST_SS_ICB_ENABLE_0, 0x0ull); - REGV_WR32(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, 0x0ul); -} - -static void ivpu_hw_40xx_irq_wdt_nce_handler(struct ivpu_device *vdev) -{ - /* TODO: For LNN hang consider engine reset instead of full recovery */ - ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ"); -} - -static void ivpu_hw_40xx_irq_wdt_mss_handler(struct ivpu_device *vdev) -{ - ivpu_hw_wdt_disable(vdev); - ivpu_pm_trigger_recovery(vdev, "WDT MSS 
IRQ"); -} - -static void ivpu_hw_40xx_irq_noc_firewall_handler(struct ivpu_device *vdev) -{ - ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ"); -} - -/* Handler for IRQs from VPU core (irqV) */ -static bool ivpu_hw_40xx_irqv_handler(struct ivpu_device *vdev, int irq, bool *wake_thread) -{ - u32 status = REGV_RD32(VPU_40XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK; - - if (!status) - return false; - - REGV_WR32(VPU_40XX_HOST_SS_ICB_CLEAR_0, status); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status)) - ivpu_mmu_irq_evtq_handler(vdev); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT, status)) - ivpu_ipc_irq_handler(vdev, wake_thread); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT, status)) - ivpu_dbg(vdev, IRQ, "MMU sync complete\n"); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT, status)) - ivpu_mmu_irq_gerr_handler(vdev); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status)) - ivpu_hw_40xx_irq_wdt_mss_handler(vdev); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, status)) - ivpu_hw_40xx_irq_wdt_nce_handler(vdev); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, status)) - ivpu_hw_40xx_irq_noc_firewall_handler(vdev); - - return true; -} - -/* Handler for IRQs from Buttress core (irqB) */ -static bool ivpu_hw_40xx_irqb_handler(struct ivpu_device *vdev, int irq) -{ - bool schedule_recovery = false; - u32 status = REGB_RD32(VPU_HW_BTRS_LNL_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK; - - if (!status) - return false; - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) - ivpu_dbg(vdev, IRQ, "FREQ_CHANGE"); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR, status)) { - ivpu_err(vdev, "ATS_ERR LOG1 0x%08x ATS_ERR_LOG2 0x%08x\n", - REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG1), - REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG2)); - REGB_WR32(VPU_HW_BTRS_LNL_ATS_ERR_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR, status)) { - ivpu_err(vdev, "CFI0_ERR 0x%08x", REGB_RD32(VPU_HW_BTRS_LNL_CFI0_ERR_LOG)); - REGB_WR32(VPU_HW_BTRS_LNL_CFI0_ERR_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR, status)) { - ivpu_err(vdev, "CFI1_ERR 0x%08x", REGB_RD32(VPU_HW_BTRS_LNL_CFI1_ERR_LOG)); - REGB_WR32(VPU_HW_BTRS_LNL_CFI1_ERR_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR, status)) { - ivpu_err(vdev, "IMR_ERR_CFI0 LOW: 0x%08x HIGH: 0x%08x", - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_LOW), - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_HIGH)); - REGB_WR32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR, status)) { - ivpu_err(vdev, "IMR_ERR_CFI1 LOW: 0x%08x HIGH: 0x%08x", - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_LOW), - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_HIGH)); - REGB_WR32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_CLEAR, 0x1); - schedule_recovery = true; - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) { - ivpu_err(vdev, "Survivability error detected\n"); - schedule_recovery = true; - } - - /* This must be done after interrupts are cleared at the source. 
*/ - REGB_WR32(VPU_HW_BTRS_LNL_INTERRUPT_STAT, status); - - if (schedule_recovery) - ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); - - return true; -} - -static irqreturn_t ivpu_hw_40xx_irq_handler(int irq, void *ptr) -{ - bool irqv_handled, irqb_handled, wake_thread = false; - struct ivpu_device *vdev = ptr; - - REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x1); - - irqv_handled = ivpu_hw_40xx_irqv_handler(vdev, irq, &wake_thread); - irqb_handled = ivpu_hw_40xx_irqb_handler(vdev, irq); - - /* Re-enable global interrupts to re-trigger MSI for pending interrupts */ - REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x0); - - if (wake_thread) - return IRQ_WAKE_THREAD; - if (irqv_handled || irqb_handled) - return IRQ_HANDLED; - return IRQ_NONE; -} - -static void ivpu_hw_40xx_diagnose_failure(struct ivpu_device *vdev) -{ - u32 irqv = REGV_RD32(VPU_40XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK; - u32 irqb = REGB_RD32(VPU_HW_BTRS_LNL_INTERRUPT_STAT) & BUTTRESS_IRQ_MASK; - - if (ivpu_hw_40xx_reg_ipc_rx_count_get(vdev)) - ivpu_err(vdev, "IPC FIFO queue not empty, missed IPC IRQ"); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, irqv)) - ivpu_err(vdev, "WDT MSS timeout detected\n"); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, irqv)) - ivpu_err(vdev, "WDT NCE timeout detected\n"); - - if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, irqv)) - ivpu_err(vdev, "NOC Firewall irq detected\n"); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR, irqb)) { - ivpu_err(vdev, "ATS_ERR_LOG1 0x%08x ATS_ERR_LOG2 0x%08x\n", - REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG1), - REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG2)); - } - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR, irqb)) - ivpu_err(vdev, "CFI0_ERR_LOG 0x%08x\n", REGB_RD32(VPU_HW_BTRS_LNL_CFI0_ERR_LOG)); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR, irqb)) - ivpu_err(vdev, "CFI1_ERR_LOG 0x%08x\n", REGB_RD32(VPU_HW_BTRS_LNL_CFI1_ERR_LOG)); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR, irqb)) - ivpu_err(vdev, "IMR_ERR_CFI0 LOW: 0x%08x HIGH: 0x%08x\n", - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_LOW), - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_HIGH)); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR, irqb)) - ivpu_err(vdev, "IMR_ERR_CFI1 LOW: 0x%08x HIGH: 0x%08x\n", - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_LOW), - REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_HIGH)); - - if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, irqb)) - ivpu_err(vdev, "Survivability error detected\n"); -} - -const struct ivpu_hw_ops ivpu_hw_40xx_ops = { - .info_init = ivpu_hw_40xx_info_init, - .power_up = ivpu_hw_40xx_power_up, - .is_idle = ivpu_hw_40xx_is_idle, - .wait_for_idle = ivpu_hw_40xx_wait_for_idle, - .power_down = ivpu_hw_40xx_power_down, - .reset = ivpu_hw_40xx_reset, - .boot_fw = ivpu_hw_40xx_boot_fw, - .wdt_disable = ivpu_hw_40xx_wdt_disable, - .diagnose_failure = ivpu_hw_40xx_diagnose_failure, - .profiling_freq_get = ivpu_hw_40xx_profiling_freq_get, - .profiling_freq_drive = ivpu_hw_40xx_profiling_freq_drive, - .reg_pll_freq_get = ivpu_hw_40xx_reg_pll_freq_get, - .ratio_to_freq = ivpu_hw_40xx_ratio_to_freq, - .reg_telemetry_offset_get = ivpu_hw_40xx_reg_telemetry_offset_get, - .reg_telemetry_size_get = ivpu_hw_40xx_reg_telemetry_size_get, - .reg_telemetry_enable_get = ivpu_hw_40xx_reg_telemetry_enable_get, - .reg_db_set = ivpu_hw_40xx_reg_db_set, - .reg_ipc_rx_addr_get = ivpu_hw_40xx_reg_ipc_rx_addr_get, - .reg_ipc_rx_count_get = 
ivpu_hw_40xx_reg_ipc_rx_count_get, - .reg_ipc_tx_set = ivpu_hw_40xx_reg_ipc_tx_set, - .irq_clear = ivpu_hw_40xx_irq_clear, - .irq_enable = ivpu_hw_40xx_irq_enable, - .irq_disable = ivpu_hw_40xx_irq_disable, - .irq_handler = ivpu_hw_40xx_irq_handler, -}; diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw_btrs.c new file mode 100644 index 000000000000..13734d1abc7d --- /dev/null +++ b/drivers/accel/ivpu/ivpu_hw_btrs.c @@ -0,0 +1,881 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020-2024 Intel Corporation + */ + +#include "ivpu_drv.h" +#include "ivpu_hw.h" +#include "ivpu_hw_btrs.h" +#include "ivpu_hw_btrs_lnl_reg.h" +#include "ivpu_hw_btrs_mtl_reg.h" +#include "ivpu_hw_reg_io.h" +#include "ivpu_pm.h" + +#define BTRS_MTL_IRQ_MASK ((REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR))) + +#define BTRS_LNL_IRQ_MASK ((REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR)) | \ + (REG_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR))) + +#define BTRS_MTL_ALL_IRQ_MASK (BTRS_MTL_IRQ_MASK | (REG_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, \ + FREQ_CHANGE))) + +#define BTRS_IRQ_DISABLE_MASK ((u32)-1) + +#define BTRS_LNL_ALL_IRQ_MASK ((u32)-1) + +#define BTRS_MTL_WP_CONFIG_1_TILE_5_3_RATIO WP_CONFIG(MTL_CONFIG_1_TILE, MTL_PLL_RATIO_5_3) +#define BTRS_MTL_WP_CONFIG_1_TILE_4_3_RATIO WP_CONFIG(MTL_CONFIG_1_TILE, MTL_PLL_RATIO_4_3) +#define BTRS_MTL_WP_CONFIG_2_TILE_5_3_RATIO WP_CONFIG(MTL_CONFIG_2_TILE, MTL_PLL_RATIO_5_3) +#define BTRS_MTL_WP_CONFIG_2_TILE_4_3_RATIO WP_CONFIG(MTL_CONFIG_2_TILE, MTL_PLL_RATIO_4_3) +#define BTRS_MTL_WP_CONFIG_0_TILE_PLL_OFF WP_CONFIG(0, 0) + +#define PLL_CDYN_DEFAULT 0x80 +#define PLL_EPP_DEFAULT 0x80 +#define PLL_CONFIG_DEFAULT 0x0 +#define PLL_SIMULATION_FREQ 10000000 +#define PLL_REF_CLK_FREQ 50000000 +#define PLL_TIMEOUT_US (1500 * USEC_PER_MSEC) +#define IDLE_TIMEOUT_US (5 * USEC_PER_MSEC) +#define TIMEOUT_US (150 * USEC_PER_MSEC) + +/* Work point configuration values */ +#define WP_CONFIG(tile, ratio) (((tile) << 8) | (ratio)) +#define MTL_CONFIG_1_TILE 0x01 +#define MTL_CONFIG_2_TILE 0x02 +#define MTL_PLL_RATIO_5_3 0x01 +#define MTL_PLL_RATIO_4_3 0x02 +#define BTRS_MTL_TILE_FUSE_ENABLE_BOTH 0x0 +#define BTRS_MTL_TILE_SKU_BOTH 0x3630 + +#define BTRS_LNL_TILE_MAX_NUM 6 +#define BTRS_LNL_TILE_MAX_MASK 0x3f + +#define WEIGHTS_DEFAULT 0xf711f711u +#define WEIGHTS_ATS_DEFAULT 0x0000f711u + +#define DCT_REQ 0x2 +#define DCT_ENABLE 0x1 +#define DCT_DISABLE 0x0 + +bool ivpu_hw_btrs_irqs_clear_with_0_mtl(struct ivpu_device *vdev) +{ + REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, BTRS_MTL_ALL_IRQ_MASK); + if (REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) == BTRS_MTL_ALL_IRQ_MASK) { + /* Writing 1s does not clear the interrupt status register */ + REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, 0x0); + return true; + } + + return false; +} + +static void freq_ratios_init_mtl(struct ivpu_device *vdev) +{ + struct ivpu_hw_info *hw = vdev->hw; + u32 fmin_fuse, fmax_fuse; + + fmin_fuse = REGB_RD32(VPU_HW_BTRS_MTL_FMIN_FUSE); + hw->pll.min_ratio = REG_GET_FLD(VPU_HW_BTRS_MTL_FMIN_FUSE, MIN_RATIO, fmin_fuse); + hw->pll.pn_ratio = REG_GET_FLD(VPU_HW_BTRS_MTL_FMIN_FUSE, PN_RATIO, fmin_fuse); + + fmax_fuse = REGB_RD32(VPU_HW_BTRS_MTL_FMAX_FUSE); + hw->pll.max_ratio =
REG_GET_FLD(VPU_HW_BTRS_MTL_FMAX_FUSE, MAX_RATIO, fmax_fuse); +} + +static void freq_ratios_init_lnl(struct ivpu_device *vdev) +{ + struct ivpu_hw_info *hw = vdev->hw; + u32 fmin_fuse, fmax_fuse; + + fmin_fuse = REGB_RD32(VPU_HW_BTRS_LNL_FMIN_FUSE); + hw->pll.min_ratio = REG_GET_FLD(VPU_HW_BTRS_LNL_FMIN_FUSE, MIN_RATIO, fmin_fuse); + hw->pll.pn_ratio = REG_GET_FLD(VPU_HW_BTRS_LNL_FMIN_FUSE, PN_RATIO, fmin_fuse); + + fmax_fuse = REGB_RD32(VPU_HW_BTRS_LNL_FMAX_FUSE); + hw->pll.max_ratio = REG_GET_FLD(VPU_HW_BTRS_LNL_FMAX_FUSE, MAX_RATIO, fmax_fuse); +} + +void ivpu_hw_btrs_freq_ratios_init(struct ivpu_device *vdev) +{ + struct ivpu_hw_info *hw = vdev->hw; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + freq_ratios_init_mtl(vdev); + else + freq_ratios_init_lnl(vdev); + + hw->pll.min_ratio = clamp_t(u8, ivpu_pll_min_ratio, hw->pll.min_ratio, hw->pll.max_ratio); + hw->pll.max_ratio = clamp_t(u8, ivpu_pll_max_ratio, hw->pll.min_ratio, hw->pll.max_ratio); + hw->pll.pn_ratio = clamp_t(u8, hw->pll.pn_ratio, hw->pll.min_ratio, hw->pll.max_ratio); +} + +static bool tile_disable_check(u32 config) +{ + /* Allowed values: 0 or one bit from range 0-5 (6 tiles) */ + if (config == 0) + return true; + + if (config > BIT(BTRS_LNL_TILE_MAX_NUM - 1)) + return false; + + if ((config & (config - 1)) == 0) + return true; + + return false; +} + +static int read_tile_config_fuse(struct ivpu_device *vdev, u32 *tile_fuse_config) +{ + u32 fuse; + u32 config; + + fuse = REGB_RD32(VPU_HW_BTRS_LNL_TILE_FUSE); + if (!REG_TEST_FLD(VPU_HW_BTRS_LNL_TILE_FUSE, VALID, fuse)) { + ivpu_err(vdev, "Fuse: invalid (0x%x)\n", fuse); + return -EIO; + } + + config = REG_GET_FLD(VPU_HW_BTRS_LNL_TILE_FUSE, CONFIG, fuse); + if (!tile_disable_check(config)) { + ivpu_err(vdev, "Fuse: Invalid tile disable config (0x%x)\n", config); + return -EIO; + } + + if (config) + ivpu_dbg(vdev, MISC, "Fuse: %d tiles enabled. Tile number %d disabled\n", + BTRS_LNL_TILE_MAX_NUM - 1, ffs(config) - 1); + else + ivpu_dbg(vdev, MISC, "Fuse: All %d tiles enabled\n", BTRS_LNL_TILE_MAX_NUM); + + *tile_fuse_config = config; + return 0; +} + +static int info_init_mtl(struct ivpu_device *vdev) +{ + struct ivpu_hw_info *hw = vdev->hw; + + hw->tile_fuse = BTRS_MTL_TILE_FUSE_ENABLE_BOTH; + hw->sku = BTRS_MTL_TILE_SKU_BOTH; + hw->config = BTRS_MTL_WP_CONFIG_2_TILE_4_3_RATIO; + hw->sched_mode = ivpu_sched_mode; + + return 0; +} + +static int info_init_lnl(struct ivpu_device *vdev) +{ + struct ivpu_hw_info *hw = vdev->hw; + u32 tile_fuse_config; + int ret; + + ret = read_tile_config_fuse(vdev, &tile_fuse_config); + if (ret) + return ret; + + hw->sched_mode = ivpu_sched_mode; + hw->tile_fuse = tile_fuse_config; + hw->pll.profiling_freq = PLL_PROFILING_FREQ_DEFAULT; + + return 0; +} + +int ivpu_hw_btrs_info_init(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return info_init_mtl(vdev); + else + return info_init_lnl(vdev); +} + +static int wp_request_sync(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_POLL_FLD(VPU_HW_BTRS_MTL_WP_REQ_CMD, SEND, 0, PLL_TIMEOUT_US); + else + return REGB_POLL_FLD(VPU_HW_BTRS_LNL_WP_REQ_CMD, SEND, 0, PLL_TIMEOUT_US); +} + +static int wait_for_status_ready(struct ivpu_device *vdev, bool enable) +{ + u32 exp_val = enable ? 
0x1 : 0x0; + + if (IVPU_WA(punit_disabled)) + return 0; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, READY, exp_val, PLL_TIMEOUT_US); + else + return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, READY, exp_val, PLL_TIMEOUT_US); +} + +struct wp_request { + u16 min; + u16 max; + u16 target; + u16 cfg; + u16 epp; + u16 cdyn; +}; + +static void wp_request_mtl(struct ivpu_device *vdev, struct wp_request *wp) +{ + u32 val; + + val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, MIN_RATIO, wp->min, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, MAX_RATIO, wp->max, val); + REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD0, val); + + val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, TARGET_RATIO, wp->target, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, EPP, PLL_EPP_DEFAULT, val); + REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD1, val); + + val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2, CONFIG, wp->cfg, val); + REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_PAYLOAD2, val); + + val = REGB_RD32(VPU_HW_BTRS_MTL_WP_REQ_CMD); + val = REG_SET_FLD(VPU_HW_BTRS_MTL_WP_REQ_CMD, SEND, val); + REGB_WR32(VPU_HW_BTRS_MTL_WP_REQ_CMD, val); +} + +static void wp_request_lnl(struct ivpu_device *vdev, struct wp_request *wp) +{ + u32 val; + + val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, MIN_RATIO, wp->min, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, MAX_RATIO, wp->max, val); + REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD0, val); + + val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, TARGET_RATIO, wp->target, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, EPP, wp->epp, val); + REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD1, val); + + val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, CONFIG, wp->cfg, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, CDYN, wp->cdyn, val); + REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_PAYLOAD2, val); + + val = REGB_RD32(VPU_HW_BTRS_LNL_WP_REQ_CMD); + val = REG_SET_FLD(VPU_HW_BTRS_LNL_WP_REQ_CMD, SEND, val); + REGB_WR32(VPU_HW_BTRS_LNL_WP_REQ_CMD, val); +} + +static void wp_request(struct ivpu_device *vdev, struct wp_request *wp) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + wp_request_mtl(vdev, wp); + else + wp_request_lnl(vdev, wp); +} + +static int wp_request_send(struct ivpu_device *vdev, struct wp_request *wp) +{ + int ret; + + ret = wp_request_sync(vdev); + if (ret) { + ivpu_err(vdev, "Failed to sync before workpoint request: %d\n", ret); + return ret; + } + + wp_request(vdev, wp); + + ret = wp_request_sync(vdev); + if (ret) + ivpu_err(vdev, "Failed to sync after workpoint request: %d\n", ret); + + return ret; +} + +static void prepare_wp_request(struct ivpu_device *vdev, struct wp_request *wp, bool enable) +{ + struct ivpu_hw_info *hw = vdev->hw; + + wp->min = hw->pll.min_ratio; + wp->max = hw->pll.max_ratio; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) { + wp->target = enable ? hw->pll.pn_ratio : 0; + wp->cfg = enable ? hw->config : 0; + wp->cdyn = 0; + wp->epp = 0; + } else { + wp->target = hw->pll.pn_ratio; + wp->cfg = enable ? PLL_CONFIG_DEFAULT : 0; + wp->cdyn = enable ? PLL_CDYN_DEFAULT : 0; + wp->epp = enable ? 
PLL_EPP_DEFAULT : 0; + } + + /* Simics cannot start without at least one tile */ + if (enable && ivpu_is_simics(vdev)) + wp->cfg = 1; +} + +static int wait_for_pll_lock(struct ivpu_device *vdev, bool enable) +{ + u32 exp_val = enable ? 0x1 : 0x0; + + if (ivpu_hw_btrs_gen(vdev) != IVPU_HW_BTRS_MTL) + return 0; + + if (IVPU_WA(punit_disabled)) + return 0; + + return REGB_POLL_FLD(VPU_HW_BTRS_MTL_PLL_STATUS, LOCK, exp_val, PLL_TIMEOUT_US); +} + +int ivpu_hw_btrs_wp_drive(struct ivpu_device *vdev, bool enable) +{ + struct wp_request wp; + int ret; + + if (IVPU_WA(punit_disabled)) { + ivpu_dbg(vdev, PM, "Skipping workpoint request\n"); + return 0; + } + + prepare_wp_request(vdev, &wp, enable); + + ivpu_dbg(vdev, PM, "PLL workpoint request: %u Hz, config: 0x%x, epp: 0x%x, cdyn: 0x%x\n", + PLL_RATIO_TO_FREQ(wp.target), wp.cfg, wp.epp, wp.cdyn); + + ret = wp_request_send(vdev, &wp); + if (ret) { + ivpu_err(vdev, "Failed to send workpoint request: %d\n", ret); + return ret; + } + + ret = wait_for_pll_lock(vdev, enable); + if (ret) { + ivpu_err(vdev, "Timed out waiting for PLL lock\n"); + return ret; + } + + ret = wait_for_status_ready(vdev, enable); + if (ret) { + ivpu_err(vdev, "Timed out waiting for NPU ready status\n"); + return ret; + } + + return 0; +} + +static int d0i3_drive_mtl(struct ivpu_device *vdev, bool enable) +{ + int ret; + u32 val; + + ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); + if (ret) { + ivpu_err(vdev, "Failed to sync before D0i3 transition: %d\n", ret); + return ret; + } + + val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL); + if (enable) + val = REG_SET_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, I3, val); + else + val = REG_CLR_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, I3, val); + REGB_WR32(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, val); + + ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); + if (ret) + ivpu_err(vdev, "Failed to sync after D0i3 transition: %d\n", ret); + + return ret; +} + +static int d0i3_drive_lnl(struct ivpu_device *vdev, bool enable) +{ + int ret; + u32 val; + + ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); + if (ret) { + ivpu_err(vdev, "Failed to sync before D0i3 transition: %d\n", ret); + return ret; + } + + val = REGB_RD32(VPU_HW_BTRS_LNL_D0I3_CONTROL); + if (enable) + val = REG_SET_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, I3, val); + else + val = REG_CLR_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, I3, val); + REGB_WR32(VPU_HW_BTRS_LNL_D0I3_CONTROL, val); + + ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_D0I3_CONTROL, INPROGRESS, 0, TIMEOUT_US); + if (ret) { + ivpu_err(vdev, "Failed to sync after D0i3 transition: %d\n", ret); + return ret; + } + + return 0; +} + +static int d0i3_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return d0i3_drive_mtl(vdev, enable); + else + return d0i3_drive_lnl(vdev, enable); +} + +int ivpu_hw_btrs_d0i3_enable(struct ivpu_device *vdev) +{ + int ret; + + if (IVPU_WA(punit_disabled)) + return 0; + + ret = d0i3_drive(vdev, true); + if (ret) + ivpu_err(vdev, "Failed to enable D0i3: %d\n", ret); + + udelay(5); /* VPU requires 5 us to complete the transition */ + + return ret; +} + +int ivpu_hw_btrs_d0i3_disable(struct ivpu_device *vdev) +{ + int ret; + + if (IVPU_WA(punit_disabled)) + return 0; + + ret = d0i3_drive(vdev, false); + if (ret) + ivpu_err(vdev, "Failed to disable D0i3: %d\n", ret); + + return ret; +} + +int ivpu_hw_btrs_wait_for_clock_res_own_ack(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) 
== IVPU_HW_BTRS_MTL) + return 0; + + if (ivpu_is_simics(vdev)) + return 0; + + return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, CLOCK_RESOURCE_OWN_ACK, 1, TIMEOUT_US); +} + +void ivpu_hw_btrs_set_port_arbitration_weights_lnl(struct ivpu_device *vdev) +{ + REGB_WR32(VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS, WEIGHTS_DEFAULT); + REGB_WR32(VPU_HW_BTRS_LNL_PORT_ARBITRATION_WEIGHTS_ATS, WEIGHTS_ATS_DEFAULT); +} + +static int ip_reset_mtl(struct ivpu_device *vdev) +{ + int ret; + u32 val; + + ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); + if (ret) { + ivpu_err(vdev, "Timed out waiting for TRIGGER bit\n"); + return ret; + } + + val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_IP_RESET); + val = REG_SET_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, val); + REGB_WR32(VPU_HW_BTRS_MTL_VPU_IP_RESET, val); + + ret = REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_IP_RESET, TRIGGER, 0, TIMEOUT_US); + if (ret) + ivpu_err(vdev, "Timed out waiting for RESET completion\n"); + + return ret; +} + +static int ip_reset_lnl(struct ivpu_device *vdev) +{ + int ret; + u32 val; + + ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, 0, TIMEOUT_US); + if (ret) { + ivpu_err(vdev, "Wait for *_TRIGGER timed out\n"); + return ret; + } + + val = REGB_RD32(VPU_HW_BTRS_LNL_IP_RESET); + val = REG_SET_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, val); + REGB_WR32(VPU_HW_BTRS_LNL_IP_RESET, val); + + ret = REGB_POLL_FLD(VPU_HW_BTRS_LNL_IP_RESET, TRIGGER, 0, TIMEOUT_US); + if (ret) + ivpu_err(vdev, "Timed out waiting for RESET completion\n"); + + return ret; +} + +int ivpu_hw_btrs_ip_reset(struct ivpu_device *vdev) +{ + if (IVPU_WA(punit_disabled)) + return 0; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return ip_reset_mtl(vdev); + else + return ip_reset_lnl(vdev); +} + +void ivpu_hw_btrs_profiling_freq_reg_set_lnl(struct ivpu_device *vdev) +{ + u32 val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); + + if (vdev->hw->pll.profiling_freq == PLL_PROFILING_FREQ_DEFAULT) + val = REG_CLR_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, PERF_CLK, val); + else + val = REG_SET_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, PERF_CLK, val); + + REGB_WR32(VPU_HW_BTRS_LNL_VPU_STATUS, val); +} + +void ivpu_hw_btrs_ats_print_lnl(struct ivpu_device *vdev) +{ + ivpu_dbg(vdev, MISC, "Buttress ATS: %s\n", + REGB_RD32(VPU_HW_BTRS_LNL_HM_ATS) ? 
"Enable" : "Disable"); +} + +void ivpu_hw_btrs_clock_relinquish_disable_lnl(struct ivpu_device *vdev) +{ + u32 val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); + + val = REG_SET_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, DISABLE_CLK_RELINQUISH, val); + REGB_WR32(VPU_HW_BTRS_LNL_VPU_STATUS, val); +} + +bool ivpu_hw_btrs_is_idle(struct ivpu_device *vdev) +{ + u32 val; + + if (IVPU_WA(punit_disabled)) + return true; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) { + val = REGB_RD32(VPU_HW_BTRS_MTL_VPU_STATUS); + + return REG_TEST_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, READY, val) && + REG_TEST_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, IDLE, val); + } else { + val = REGB_RD32(VPU_HW_BTRS_LNL_VPU_STATUS); + + return REG_TEST_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, READY, val) && + REG_TEST_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, IDLE, val); + } +} + +int ivpu_hw_btrs_wait_for_idle(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_POLL_FLD(VPU_HW_BTRS_MTL_VPU_STATUS, IDLE, 0x1, IDLE_TIMEOUT_US); + else + return REGB_POLL_FLD(VPU_HW_BTRS_LNL_VPU_STATUS, IDLE, 0x1, IDLE_TIMEOUT_US); +} + +/* Handler for IRQs from Buttress core (irqB) */ +bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq) +{ + u32 status = REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) & BTRS_MTL_IRQ_MASK; + bool schedule_recovery = false; + + if (!status) + return false; + + if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, FREQ_CHANGE, status)) + ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq: %08x", + REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL)); + + if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR, status)) { + ivpu_err(vdev, "ATS_ERR irq 0x%016llx", REGB_RD64(VPU_HW_BTRS_MTL_ATS_ERR_LOG_0)); + REGB_WR32(VPU_HW_BTRS_MTL_ATS_ERR_CLEAR, 0x1); + schedule_recovery = true; + } + + if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR, status)) { + u32 ufi_log = REGB_RD32(VPU_HW_BTRS_MTL_UFI_ERR_LOG); + + ivpu_err(vdev, "UFI_ERR irq (0x%08x) opcode: 0x%02lx axi_id: 0x%02lx cq_id: 0x%03lx", + ufi_log, REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, OPCODE, ufi_log), + REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, AXI_ID, ufi_log), + REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, CQ_ID, ufi_log)); + REGB_WR32(VPU_HW_BTRS_MTL_UFI_ERR_CLEAR, 0x1); + schedule_recovery = true; + } + + /* This must be done after interrupts are cleared at the source. */ + if (IVPU_WA(interrupt_clear_with_0)) + /* + * Writing 1 triggers an interrupt, so we can't perform read update write. + * Clear local interrupt status by writing 0 to all bits. 
+ */ + REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, 0x0); + else + REGB_WR32(VPU_HW_BTRS_MTL_INTERRUPT_STAT, status); + + if (schedule_recovery) + ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); + + return true; +} + +/* Handler for IRQs from Buttress core (irqB) */ +bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq) +{ + u32 status = REGB_RD32(VPU_HW_BTRS_LNL_INTERRUPT_STAT) & BTRS_LNL_IRQ_MASK; + bool schedule_recovery = false; + + if (!status) + return false; + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) + ivpu_dbg(vdev, IRQ, "Survivability IRQ\n"); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) + ivpu_dbg(vdev, IRQ, "FREQ_CHANGE irq: %08x", REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ)); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR, status)) { + ivpu_err(vdev, "ATS_ERR LOG1 0x%08x ATS_ERR_LOG2 0x%08x\n", + REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG1), + REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG2)); + REGB_WR32(VPU_HW_BTRS_LNL_ATS_ERR_CLEAR, 0x1); + schedule_recovery = true; + } + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR, status)) { + ivpu_err(vdev, "CFI0_ERR 0x%08x", REGB_RD32(VPU_HW_BTRS_LNL_CFI0_ERR_LOG)); + REGB_WR32(VPU_HW_BTRS_LNL_CFI0_ERR_CLEAR, 0x1); + schedule_recovery = true; + } + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR, status)) { + ivpu_err(vdev, "CFI1_ERR 0x%08x", REGB_RD32(VPU_HW_BTRS_LNL_CFI1_ERR_LOG)); + REGB_WR32(VPU_HW_BTRS_LNL_CFI1_ERR_CLEAR, 0x1); + schedule_recovery = true; + } + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR, status)) { + ivpu_err(vdev, "IMR_ERR_CFI0 LOW: 0x%08x HIGH: 0x%08x", + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_LOW), + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_HIGH)); + REGB_WR32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_CLEAR, 0x1); + schedule_recovery = true; + } + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR, status)) { + ivpu_err(vdev, "IMR_ERR_CFI1 LOW: 0x%08x HIGH: 0x%08x", + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_LOW), + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_HIGH)); + REGB_WR32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_CLEAR, 0x1); + schedule_recovery = true; + } + + /* This must be done after interrupts are cleared at the source. */ + REGB_WR32(VPU_HW_BTRS_LNL_INTERRUPT_STAT, status); + + if (schedule_recovery) + ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); + + return true; +} + +static void dct_drive_40xx(struct ivpu_device *vdev, u32 dct_val) +{ + u32 val = REGB_RD32(VPU_HW_BTRS_LNL_PCODE_MAILBOX); + + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_PCODE_MAILBOX, CMD, DCT_REQ, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_PCODE_MAILBOX, PARAM1, + dct_val ? 
DCT_ENABLE : DCT_DISABLE, val); + val = REG_SET_FLD_NUM(VPU_HW_BTRS_LNL_PCODE_MAILBOX, PARAM2, dct_val, val); + + REGB_WR32(VPU_HW_BTRS_LNL_PCODE_MAILBOX, val); +} + +void ivpu_hw_btrs_dct_drive(struct ivpu_device *vdev, u32 dct_val) +{ + dct_drive_40xx(vdev, dct_val); +} + +static u32 pll_ratio_to_freq_mtl(u32 ratio, u32 config) +{ + u32 pll_clock = PLL_REF_CLK_FREQ * ratio; + u32 cpu_clock; + + if ((config & 0xff) == MTL_PLL_RATIO_4_3) + cpu_clock = pll_clock * 2 / 4; + else + cpu_clock = pll_clock * 2 / 5; + + return cpu_clock; +} + +u32 ivpu_hw_btrs_ratio_to_freq(struct ivpu_device *vdev, u32 ratio) +{ + struct ivpu_hw_info *hw = vdev->hw; + + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return pll_ratio_to_freq_mtl(ratio, hw->config); + else + return PLL_RATIO_TO_FREQ(ratio); +} + +static u32 pll_freq_get_mtl(struct ivpu_device *vdev) +{ + u32 pll_curr_ratio; + + pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_MTL_CURRENT_PLL); + pll_curr_ratio &= VPU_HW_BTRS_MTL_CURRENT_PLL_RATIO_MASK; + + if (!ivpu_is_silicon(vdev)) + return PLL_SIMULATION_FREQ; + + return pll_ratio_to_freq_mtl(pll_curr_ratio, vdev->hw->config); +} + +static u32 pll_freq_get_lnl(struct ivpu_device *vdev) +{ + u32 pll_curr_ratio; + + pll_curr_ratio = REGB_RD32(VPU_HW_BTRS_LNL_PLL_FREQ); + pll_curr_ratio &= VPU_HW_BTRS_LNL_PLL_FREQ_RATIO_MASK; + + return PLL_RATIO_TO_FREQ(pll_curr_ratio); +} + +u32 ivpu_hw_btrs_pll_freq_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return pll_freq_get_mtl(vdev); + else + return pll_freq_get_lnl(vdev); +} + +u32 ivpu_hw_btrs_telemetry_offset_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_OFFSET); + else + return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_OFFSET); +} + +u32 ivpu_hw_btrs_telemetry_size_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_SIZE); + else + return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_SIZE); +} + +u32 ivpu_hw_btrs_telemetry_enable_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + return REGB_RD32(VPU_HW_BTRS_MTL_VPU_TELEMETRY_ENABLE); + else + return REGB_RD32(VPU_HW_BTRS_LNL_VPU_TELEMETRY_ENABLE); +} + +void ivpu_hw_btrs_global_int_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x1); + else + REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x1); +} + +void ivpu_hw_btrs_global_int_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x0); + else + REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x0); +} + +void ivpu_hw_btrs_irq_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) { + REGB_WR32(VPU_HW_BTRS_MTL_LOCAL_INT_MASK, (u32)(~BTRS_MTL_IRQ_MASK)); + REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x0); + } else { + REGB_WR32(VPU_HW_BTRS_LNL_LOCAL_INT_MASK, (u32)(~BTRS_LNL_IRQ_MASK)); + REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x0); + } +} + +void ivpu_hw_btrs_irq_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) { + REGB_WR32(VPU_HW_BTRS_MTL_GLOBAL_INT_MASK, 0x1); + REGB_WR32(VPU_HW_BTRS_MTL_LOCAL_INT_MASK, BTRS_IRQ_DISABLE_MASK); + } else { + REGB_WR32(VPU_HW_BTRS_LNL_GLOBAL_INT_MASK, 0x1); + REGB_WR32(VPU_HW_BTRS_LNL_LOCAL_INT_MASK, BTRS_IRQ_DISABLE_MASK); + } +} +
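+/* + * Failure diagnostics: dump any pending buttress error state to the + * kernel log without clearing it, so the source of a hang can be + * identified from the interrupt status and error log registers. + */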
+static void diagnose_failure_mtl(struct ivpu_device *vdev) +{ + u32 reg = REGB_RD32(VPU_HW_BTRS_MTL_INTERRUPT_STAT) & BTRS_MTL_IRQ_MASK; + + if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, ATS_ERR, reg)) + ivpu_err(vdev, "ATS_ERR irq 0x%016llx", REGB_RD64(VPU_HW_BTRS_MTL_ATS_ERR_LOG_0)); + + if (REG_TEST_FLD(VPU_HW_BTRS_MTL_INTERRUPT_STAT, UFI_ERR, reg)) { + u32 log = REGB_RD32(VPU_HW_BTRS_MTL_UFI_ERR_LOG); + + ivpu_err(vdev, "UFI_ERR irq (0x%08x) opcode: 0x%02lx axi_id: 0x%02lx cq_id: 0x%03lx", + log, REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, OPCODE, log), + REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, AXI_ID, log), + REG_GET_FLD(VPU_HW_BTRS_MTL_UFI_ERR_LOG, CQ_ID, log)); + } +} + +static void diagnose_failure_lnl(struct ivpu_device *vdev) +{ + u32 reg = REGB_RD32(VPU_HW_BTRS_LNL_INTERRUPT_STAT) & BTRS_LNL_IRQ_MASK; + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, ATS_ERR, reg)) { + ivpu_err(vdev, "ATS_ERR_LOG1 0x%08x ATS_ERR_LOG2 0x%08x\n", + REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG1), + REGB_RD32(VPU_HW_BTRS_LNL_ATS_ERR_LOG2)); + } + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI0_ERR, reg)) + ivpu_err(vdev, "CFI0_ERR_LOG 0x%08x\n", REGB_RD32(VPU_HW_BTRS_LNL_CFI0_ERR_LOG)); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, CFI1_ERR, reg)) + ivpu_err(vdev, "CFI1_ERR_LOG 0x%08x\n", REGB_RD32(VPU_HW_BTRS_LNL_CFI1_ERR_LOG)); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR0_ERR, reg)) + ivpu_err(vdev, "IMR_ERR_CFI0 LOW: 0x%08x HIGH: 0x%08x\n", + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_LOW), + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI0_HIGH)); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, IMR1_ERR, reg)) + ivpu_err(vdev, "IMR_ERR_CFI1 LOW: 0x%08x HIGH: 0x%08x\n", + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_LOW), + REGB_RD32(VPU_HW_BTRS_LNL_IMR_ERR_CFI1_HIGH)); + + if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, reg)) + ivpu_err(vdev, "Survivability IRQ\n"); +} + +void ivpu_hw_btrs_diagnose_failure(struct ivpu_device *vdev) +{ + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) + diagnose_failure_mtl(vdev); + else + diagnose_failure_lnl(vdev); +} diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.h b/drivers/accel/ivpu/ivpu_hw_btrs.h new file mode 100644 index 000000000000..b3e3ae2aa578 --- /dev/null +++ b/drivers/accel/ivpu/ivpu_hw_btrs.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020-2024 Intel Corporation + */ + +#ifndef __IVPU_HW_BTRS_H__ +#define __IVPU_HW_BTRS_H__ + +#include "ivpu_drv.h" +#include "ivpu_hw_37xx_reg.h" +#include "ivpu_hw_40xx_reg.h" +#include "ivpu_hw_reg_io.h" + +#define PLL_PROFILING_FREQ_DEFAULT 38400000 +#define PLL_PROFILING_FREQ_HIGH 400000000 +#define PLL_RATIO_TO_FREQ(x) ((x) * PLL_REF_CLK_FREQ) + +int ivpu_hw_btrs_info_init(struct ivpu_device *vdev); +void ivpu_hw_btrs_freq_ratios_init(struct ivpu_device *vdev); +int ivpu_hw_btrs_irqs_clear_with_0_mtl(struct ivpu_device *vdev); +int ivpu_hw_btrs_wp_drive(struct ivpu_device *vdev, bool enable); +int ivpu_hw_btrs_wait_for_clock_res_own_ack(struct ivpu_device *vdev); +int ivpu_hw_btrs_d0i3_enable(struct ivpu_device *vdev); +int ivpu_hw_btrs_d0i3_disable(struct ivpu_device *vdev); +void ivpu_hw_btrs_set_port_arbitration_weights_lnl(struct ivpu_device *vdev); +bool ivpu_hw_btrs_is_idle(struct ivpu_device *vdev); +int ivpu_hw_btrs_wait_for_idle(struct ivpu_device *vdev); +int ivpu_hw_btrs_ip_reset(struct ivpu_device *vdev); +void ivpu_hw_btrs_profiling_freq_reg_set_lnl(struct ivpu_device *vdev); +void ivpu_hw_btrs_ats_print_lnl(struct ivpu_device *vdev);
+void ivpu_hw_btrs_clock_relinquish_disable_lnl(struct ivpu_device *vdev); +bool ivpu_hw_btrs_irq_handler_mtl(struct ivpu_device *vdev, int irq); +bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *vdev, int irq); +void ivpu_hw_btrs_dct_drive(struct ivpu_device *vdev, u32 dct_val); +u32 ivpu_hw_btrs_pll_freq_get(struct ivpu_device *vdev); +u32 ivpu_hw_btrs_ratio_to_freq(struct ivpu_device *vdev, u32 ratio); +u32 ivpu_hw_btrs_telemetry_offset_get(struct ivpu_device *vdev); +u32 ivpu_hw_btrs_telemetry_size_get(struct ivpu_device *vdev); +u32 ivpu_hw_btrs_telemetry_enable_get(struct ivpu_device *vdev); +void ivpu_hw_btrs_global_int_enable(struct ivpu_device *vdev); +void ivpu_hw_btrs_global_int_disable(struct ivpu_device *vdev); +void ivpu_hw_btrs_irq_enable(struct ivpu_device *vdev); +void ivpu_hw_btrs_irq_disable(struct ivpu_device *vdev); +void ivpu_hw_btrs_diagnose_failure(struct ivpu_device *vdev); + +#endif /* __IVPU_HW_BTRS_H__ */ diff --git a/drivers/accel/ivpu/ivpu_hw_ip.c b/drivers/accel/ivpu/ivpu_hw_ip.c new file mode 100644 index 000000000000..00f68e2fcb3f --- /dev/null +++ b/drivers/accel/ivpu/ivpu_hw_ip.c @@ -0,0 +1,1174 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2020-2024 Intel Corporation + */ + +#include "ivpu_drv.h" +#include "ivpu_fw.h" +#include "ivpu_hw.h" +#include "ivpu_hw_37xx_reg.h" +#include "ivpu_hw_40xx_reg.h" +#include "ivpu_hw_ip.h" +#include "ivpu_hw_reg_io.h" +#include "ivpu_mmu.h" +#include "ivpu_pm.h" + +#define PWR_ISLAND_EN_POST_DLY_FREQ_DEFAULT 0 +#define PWR_ISLAND_EN_POST_DLY_FREQ_HIGH 18 +#define PWR_ISLAND_STATUS_DLY_FREQ_DEFAULT 3 +#define PWR_ISLAND_STATUS_DLY_FREQ_HIGH 46 +#define PWR_ISLAND_STATUS_TIMEOUT_US (5 * USEC_PER_MSEC) + +#define TIM_SAFE_ENABLE 0xf1d0dead +#define TIM_WATCHDOG_RESET_VALUE 0xffffffff + +#define ICB_0_IRQ_MASK_37XX ((REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT))) + +#define ICB_1_IRQ_MASK_37XX ((REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_2_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_3_INT)) | \ + (REG_FLD(VPU_37XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_4_INT))) + +#define ICB_0_1_IRQ_MASK_37XX ((((u64)ICB_1_IRQ_MASK_37XX) << 32) | ICB_0_IRQ_MASK_37XX) + +#define ICB_0_IRQ_MASK_40XX ((REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT))) + +#define ICB_1_IRQ_MASK_40XX ((REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_2_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_3_INT)) | \ + (REG_FLD(VPU_40XX_HOST_SS_ICB_STATUS_1, CPU_INT_REDIRECT_4_INT))) + +#define ICB_0_1_IRQ_MASK_40XX ((((u64)ICB_1_IRQ_MASK_40XX) << 32) | ICB_0_IRQ_MASK_40XX) + +#define ITF_FIREWALL_VIOLATION_MASK_37XX ((REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_ROM_CMX)) | \ + 
(REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_DBG)) | \ + (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, CSS_CTRL)) | \ + (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, DEC400)) | \ + (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_NCE)) | \ + (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI)) | \ + (REG_FLD(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI_CMX))) + +#define ITF_FIREWALL_VIOLATION_MASK_40XX ((REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_ROM_CMX)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_DBG)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, CSS_CTRL)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, DEC400)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_NCE)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI)) | \ + (REG_FLD(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, MSS_MBI_CMX))) + +static int wait_for_ip_bar(struct ivpu_device *vdev) +{ + return REGV_POLL_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, AON, 0, 100); +} + +static void host_ss_rst_clr(struct ivpu_device *vdev) +{ + u32 val = 0; + + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, TOP_NOC, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, DSS_MAS, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_CLR, MSS_MAS, val); + + REGV_WR32(VPU_37XX_HOST_SS_CPR_RST_CLR, val); +} + +static int host_ss_noc_qreqn_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QREQN); + + if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qreqn_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QREQN); + + if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qreqn_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return host_ss_noc_qreqn_check_37xx(vdev, exp_val); + else + return host_ss_noc_qreqn_check_40xx(vdev, exp_val); +} + +static int host_ss_noc_qacceptn_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QACCEPTN); + + if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QACCEPTN, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qacceptn_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QACCEPTN); + + if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QACCEPTN, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return host_ss_noc_qacceptn_check_37xx(vdev, exp_val); + else + return host_ss_noc_qacceptn_check_40xx(vdev, exp_val); +} + +static int host_ss_noc_qdeny_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QDENY); + + if (!REG_TEST_FLD_NUM(VPU_37XX_HOST_SS_NOC_QDENY, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qdeny_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QDENY); + + if (!REG_TEST_FLD_NUM(VPU_40XX_HOST_SS_NOC_QDENY, TOP_SOCMMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int host_ss_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return host_ss_noc_qdeny_check_37xx(vdev, exp_val); + else + return host_ss_noc_qdeny_check_40xx(vdev, exp_val); +} +
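+/* + * The TOP NOC uses the same QREQN/QACCEPTN/QDENY quiesce handshake as the + * HOST SS NOC above: the driver drives a request through QREQN and then + * polls QACCEPTN and QDENY for the expected acknowledgement. + */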
+static int top_noc_qreqn_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); + + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qreqn_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QREQN); + + if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qreqn_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return top_noc_qreqn_check_37xx(vdev, exp_val); + else + return top_noc_qreqn_check_40xx(vdev, exp_val); +} + +int ivpu_hw_ip_host_ss_configure(struct ivpu_device *vdev) +{ + int ret; + + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + ret = wait_for_ip_bar(vdev); + if (ret) { + ivpu_err(vdev, "Timed out waiting for NPU IP bar\n"); + return ret; + } + host_ss_rst_clr(vdev); + } + + ret = host_ss_noc_qreqn_check(vdev, 0x0); + if (ret) { + ivpu_err(vdev, "Failed qreqn check: %d\n", ret); + return ret; + } + + ret = host_ss_noc_qacceptn_check(vdev, 0x0); + if (ret) { + ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); + return ret; + } + + ret = host_ss_noc_qdeny_check(vdev, 0x0); + if (ret) + ivpu_err(vdev, "Failed qdeny check: %d\n", ret); + + return ret; +} + +static void idle_gen_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_VPU_IDLE_GEN); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_VPU_IDLE_GEN, EN, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_VPU_IDLE_GEN, EN, val); + + REGV_WR32(VPU_37XX_HOST_SS_AON_VPU_IDLE_GEN, val); +} + +static void idle_gen_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_IDLE_GEN); + + if (enable) + val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_IDLE_GEN, EN, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_IDLE_GEN, EN, val); + + REGV_WR32(VPU_40XX_HOST_SS_AON_IDLE_GEN, val); +} + +void ivpu_hw_ip_idle_gen_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + idle_gen_drive_37xx(vdev, true); + else + idle_gen_drive_40xx(vdev, true); +} + +void ivpu_hw_ip_idle_gen_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + idle_gen_drive_37xx(vdev, false); + else + idle_gen_drive_40xx(vdev, false); +} + +static void pwr_island_delay_set_50xx(struct ivpu_device *vdev) +{ + u32 val, post, status; + + if (vdev->hw->pll.profiling_freq == PLL_PROFILING_FREQ_DEFAULT) { + post = PWR_ISLAND_EN_POST_DLY_FREQ_DEFAULT; + status = PWR_ISLAND_STATUS_DLY_FREQ_DEFAULT; + } else { + post = PWR_ISLAND_EN_POST_DLY_FREQ_HIGH; + status = PWR_ISLAND_STATUS_DLY_FREQ_HIGH; + } + + val = REGV_RD32(VPU_50XX_HOST_SS_AON_PWR_ISLAND_EN_POST_DLY); + val = REG_SET_FLD_NUM(VPU_50XX_HOST_SS_AON_PWR_ISLAND_EN_POST_DLY, POST_DLY, post, val); + REGV_WR32(VPU_50XX_HOST_SS_AON_PWR_ISLAND_EN_POST_DLY, val); + + val = REGV_RD32(VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY); + val = REG_SET_FLD_NUM(VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY, STATUS_DLY, status, val); + REGV_WR32(VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY, val); +} +
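+/* + * The power island is brought up in two stages: the trickle supply is + * enabled first to pre-charge the island, then the main supply is turned + * on. On 40xx and newer a short delay is required between the stages. + */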
+static void pwr_island_trickle_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, MSS_CPU, val); + + REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, val); +} + +static void pwr_island_trickle_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0); + + if (enable) + val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, CSS_CPU, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, CSS_CPU, val); + + REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_TRICKLE_EN0, val); + + if (enable) + ndelay(500); +} + +static void pwr_island_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, MSS_CPU, val); + + REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISLAND_EN0, val); +} + +static void pwr_island_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0); + + if (enable) + val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, CSS_CPU, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, CSS_CPU, val); + + REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISLAND_EN0, val); + + if (!enable) + ndelay(500); +} + +static void pwr_island_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + pwr_island_trickle_drive_37xx(vdev, true); + pwr_island_drive_37xx(vdev, true); + } else { + pwr_island_trickle_drive_40xx(vdev, true); + pwr_island_drive_40xx(vdev, true); + } +} + +static int wait_for_pwr_island_status(struct ivpu_device *vdev, u32 exp_val) +{ + if (IVPU_WA(punit_disabled)) + return 0; + + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return REGV_POLL_FLD(VPU_37XX_HOST_SS_AON_PWR_ISLAND_STATUS0, MSS_CPU, exp_val, + PWR_ISLAND_STATUS_TIMEOUT_US); + else + return REGV_POLL_FLD(VPU_40XX_HOST_SS_AON_PWR_ISLAND_STATUS0, CSS_CPU, exp_val, + PWR_ISLAND_STATUS_TIMEOUT_US); +} + +static void pwr_island_isolation_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, MSS_CPU, val); + + REGV_WR32(VPU_37XX_HOST_SS_AON_PWR_ISO_EN0, val); +} + +static void pwr_island_isolation_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0); + + if (enable) + val = REG_SET_FLD(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, CSS_CPU, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, CSS_CPU, val); + + REGV_WR32(VPU_40XX_HOST_SS_AON_PWR_ISO_EN0, val); +} + +static void pwr_island_isolation_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + pwr_island_isolation_drive_37xx(vdev, enable); + else + pwr_island_isolation_drive_40xx(vdev, enable); +} + +static void pwr_island_isolation_disable(struct ivpu_device *vdev) +{ + pwr_island_isolation_drive(vdev, false); +} + +static void host_ss_clk_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_CPR_CLK_SET); + + if (enable) { + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, TOP_NOC, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, DSS_MAS, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET,
MSS_MAS, val); + } else { + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, TOP_NOC, val); + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, DSS_MAS, val); + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_CLK_SET, MSS_MAS, val); + } + + REGV_WR32(VPU_37XX_HOST_SS_CPR_CLK_SET, val); +} + +static void host_ss_clk_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_CPR_CLK_EN); + + if (enable) { + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, TOP_NOC, val); + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, DSS_MAS, val); + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, CSS_MAS, val); + } else { + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, TOP_NOC, val); + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, DSS_MAS, val); + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_CLK_EN, CSS_MAS, val); + } + + REGV_WR32(VPU_40XX_HOST_SS_CPR_CLK_EN, val); +} + +static void host_ss_clk_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + host_ss_clk_drive_37xx(vdev, enable); + else + host_ss_clk_drive_40xx(vdev, enable); +} + +static void host_ss_clk_enable(struct ivpu_device *vdev) +{ + host_ss_clk_drive(vdev, true); +} + +static void host_ss_rst_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_CPR_RST_SET); + + if (enable) { + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, TOP_NOC, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, DSS_MAS, val); + val = REG_SET_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, MSS_MAS, val); + } else { + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, TOP_NOC, val); + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, DSS_MAS, val); + val = REG_CLR_FLD(VPU_37XX_HOST_SS_CPR_RST_SET, MSS_MAS, val); + } + + REGV_WR32(VPU_37XX_HOST_SS_CPR_RST_SET, val); +} + +static void host_ss_rst_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_CPR_RST_EN); + + if (enable) { + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, TOP_NOC, val); + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, DSS_MAS, val); + val = REG_SET_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, CSS_MAS, val); + } else { + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, TOP_NOC, val); + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, DSS_MAS, val); + val = REG_CLR_FLD(VPU_40XX_HOST_SS_CPR_RST_EN, CSS_MAS, val); + } + + REGV_WR32(VPU_40XX_HOST_SS_CPR_RST_EN, val); +} + +static void host_ss_rst_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + host_ss_rst_drive_37xx(vdev, enable); + else + host_ss_rst_drive_40xx(vdev, enable); +} + +static void host_ss_rst_enable(struct ivpu_device *vdev) +{ + host_ss_rst_drive(vdev, true); +} + +static void host_ss_noc_qreqn_top_socmmio_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_NOC_QREQN); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); + REGV_WR32(VPU_37XX_HOST_SS_NOC_QREQN, val); +} + +static void host_ss_noc_qreqn_top_socmmio_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_SS_NOC_QREQN); + + if (enable) + val = REG_SET_FLD(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_SS_NOC_QREQN, TOP_SOCMMIO, val); + REGV_WR32(VPU_40XX_HOST_SS_NOC_QREQN, val); +} + +static void host_ss_noc_qreqn_top_socmmio_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == 
IVPU_HW_IP_37XX) + host_ss_noc_qreqn_top_socmmio_drive_37xx(vdev, enable); + else + host_ss_noc_qreqn_top_socmmio_drive_40xx(vdev, enable); +} + +static int host_ss_axi_drive(struct ivpu_device *vdev, bool enable) +{ + int ret; + + host_ss_noc_qreqn_top_socmmio_drive(vdev, enable); + + ret = host_ss_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); + if (ret) { + ivpu_err(vdev, "Failed HOST SS NOC QACCEPTN check: %d\n", ret); + return ret; + } + + ret = host_ss_noc_qdeny_check(vdev, 0x0); + if (ret) + ivpu_err(vdev, "Failed HOST SS NOC QDENY check: %d\n", ret); + + return ret; +} + +static void top_noc_qreqn_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QREQN); + + if (enable) { + val = REG_SET_FLD(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, val); + val = REG_SET_FLD(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); + } else { + val = REG_CLR_FLD(VPU_40XX_TOP_NOC_QREQN, CPU_CTRL, val); + val = REG_CLR_FLD(VPU_40XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); + } + + REGV_WR32(VPU_40XX_TOP_NOC_QREQN, val); +} + +static void top_noc_qreqn_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QREQN); + + if (enable) { + val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); + val = REG_SET_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); + } else { + val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, CPU_CTRL, val); + val = REG_CLR_FLD(VPU_37XX_TOP_NOC_QREQN, HOSTIF_L2CACHE, val); + } + + REGV_WR32(VPU_37XX_TOP_NOC_QREQN, val); +} + +static void top_noc_qreqn_drive(struct ivpu_device *vdev, bool enable) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + top_noc_qreqn_drive_37xx(vdev, enable); + else + top_noc_qreqn_drive_40xx(vdev, enable); +} + +int ivpu_hw_ip_host_ss_axi_enable(struct ivpu_device *vdev) +{ + return host_ss_axi_drive(vdev, true); +} + +static int top_noc_qacceptn_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QACCEPTN); + + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qacceptn_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QACCEPTN); + + if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QACCEPTN, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QACCEPTN, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qacceptn_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return top_noc_qacceptn_check_37xx(vdev, exp_val); + else + return top_noc_qacceptn_check_40xx(vdev, exp_val); +} + +static int top_noc_qdeny_check_37xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_37XX_TOP_NOC_QDENY); + + if (!REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_37XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qdeny_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_TOP_NOC_QDENY); + + if (!REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QDENY, CPU_CTRL, exp_val, val) || + !REG_TEST_FLD_NUM(VPU_40XX_TOP_NOC_QDENY, HOSTIF_L2CACHE, exp_val, val)) + return -EIO; + + return 0; +} + +static int top_noc_qdeny_check(struct ivpu_device *vdev, u32 exp_val) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return top_noc_qdeny_check_37xx(vdev, exp_val); + else + return 
top_noc_qdeny_check_40xx(vdev, exp_val); +} + +static int top_noc_drive(struct ivpu_device *vdev, bool enable) +{ + int ret; + + top_noc_qreqn_drive(vdev, enable); + + ret = top_noc_qacceptn_check(vdev, enable ? 0x1 : 0x0); + if (ret) { + ivpu_err(vdev, "Failed TOP NOC QACCEPTN check: %d\n", ret); + return ret; + } + + ret = top_noc_qdeny_check(vdev, 0x0); + if (ret) + ivpu_err(vdev, "Failed TOP NOC QDENY check: %d\n", ret); + + return ret; +} + +int ivpu_hw_ip_top_noc_enable(struct ivpu_device *vdev) +{ + return top_noc_drive(vdev, true); +} + +static void dpu_active_drive_37xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_SS_AON_DPU_ACTIVE); + + if (enable) + val = REG_SET_FLD(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val); + else + val = REG_CLR_FLD(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, DPU_ACTIVE, val); + + REGV_WR32(VPU_37XX_HOST_SS_AON_DPU_ACTIVE, val); +} + +int ivpu_hw_ip_pwr_domain_enable(struct ivpu_device *vdev) +{ + int ret; + + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_50XX) + pwr_island_delay_set_50xx(vdev); + + pwr_island_enable(vdev); + + ret = wait_for_pwr_island_status(vdev, 0x1); + if (ret) { + ivpu_err(vdev, "Timed out waiting for power island status\n"); + return ret; + } + + ret = top_noc_qreqn_check(vdev, 0x0); + if (ret) { + ivpu_err(vdev, "Failed TOP NOC QREQN check: %d\n", ret); + return ret; + } + + host_ss_clk_enable(vdev); + pwr_island_isolation_disable(vdev); + host_ss_rst_enable(vdev); + + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + dpu_active_drive_37xx(vdev, true); + + return ret; +} + +u64 ivpu_hw_ip_read_perf_timer_counter(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return REGV_RD64(VPU_37XX_CPU_SS_TIM_PERF_FREE_CNT); + else + return REGV_RD64(VPU_40XX_CPU_SS_TIM_PERF_EXT_FREE_CNT); +} + +static void ivpu_hw_ip_snoop_disable_37xx(struct ivpu_device *vdev) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES); + + val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, NOSNOOP_OVERRIDE_EN, val); + val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AW_NOSNOOP_OVERRIDE, val); + + if (ivpu_is_force_snoop_enabled(vdev)) + val = REG_CLR_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val); + else + val = REG_SET_FLD(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, AR_NOSNOOP_OVERRIDE, val); + + REGV_WR32(VPU_37XX_HOST_IF_TCU_PTW_OVERRIDES, val); +} + +static void ivpu_hw_ip_snoop_disable_40xx(struct ivpu_device *vdev) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES); + + val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, SNOOP_OVERRIDE_EN, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AW_SNOOP_OVERRIDE, val); + + if (ivpu_is_force_snoop_enabled(vdev)) + val = REG_SET_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val); + else + val = REG_CLR_FLD(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, AR_SNOOP_OVERRIDE, val); + + REGV_WR32(VPU_40XX_HOST_IF_TCU_PTW_OVERRIDES, val); +} + +void ivpu_hw_ip_snoop_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + ivpu_hw_ip_snoop_disable_37xx(vdev); + else + ivpu_hw_ip_snoop_disable_40xx(vdev); +} +
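+/* + * Mark read (AR) and write (AW) transactions from the TBUs as carrying a + * valid MMU substream ID so that they are translated by the MMU. + */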
+static void ivpu_hw_ip_tbu_mmu_enable_37xx(struct ivpu_device *vdev) +{ + u32 val = REGV_RD32(VPU_37XX_HOST_IF_TBU_MMUSSIDV); + + val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val); + val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val); + val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val); + val = REG_SET_FLD(VPU_37XX_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val); + + REGV_WR32(VPU_37XX_HOST_IF_TBU_MMUSSIDV, val); +} + +static void ivpu_hw_ip_tbu_mmu_enable_40xx(struct ivpu_device *vdev) +{ + u32 val = REGV_RD32(VPU_40XX_HOST_IF_TBU_MMUSSIDV); + + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU0_AWMMUSSIDV, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU0_ARMMUSSIDV, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU1_AWMMUSSIDV, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU1_ARMMUSSIDV, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU2_AWMMUSSIDV, val); + val = REG_SET_FLD(VPU_40XX_HOST_IF_TBU_MMUSSIDV, TBU2_ARMMUSSIDV, val); + + REGV_WR32(VPU_40XX_HOST_IF_TBU_MMUSSIDV, val); +} + +void ivpu_hw_ip_tbu_mmu_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + ivpu_hw_ip_tbu_mmu_enable_37xx(vdev); + else + ivpu_hw_ip_tbu_mmu_enable_40xx(vdev); +} + +static int soc_cpu_boot_37xx(struct ivpu_device *vdev) +{ + u32 val; + + val = REGV_RD32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC); + val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTRUN0, val); + + val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RSTVEC, val); + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); + + val = REG_SET_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); + + val = REG_CLR_FLD(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, IRQI_RESUME0, val); + REGV_WR32(VPU_37XX_CPU_SS_MSSCPU_CPR_LEON_RT_VEC, val); + + val = vdev->fw->entry_point >> 9; + REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val); + + val = REG_SET_FLD(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, DONE, val); + REGV_WR32(VPU_37XX_HOST_SS_LOADING_ADDRESS_LO, val); + + ivpu_dbg(vdev, PM, "Booting firmware, mode: %s\n", + vdev->fw->entry_point == vdev->fw->cold_boot_entry_point ? "cold boot" : "resume"); + + return 0; +} + +static int cpu_noc_qacceptn_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QACCEPTN); + + if (!REG_TEST_FLD_NUM(VPU_40XX_CPU_SS_CPR_NOC_QACCEPTN, TOP_MMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static int cpu_noc_qdeny_check_40xx(struct ivpu_device *vdev, u32 exp_val) +{ + u32 val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QDENY); + + if (!REG_TEST_FLD_NUM(VPU_40XX_CPU_SS_CPR_NOC_QDENY, TOP_MMIO, exp_val, val)) + return -EIO; + + return 0; +} + +static void cpu_noc_top_mmio_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + u32 val = REGV_RD32(VPU_40XX_CPU_SS_CPR_NOC_QREQN); + + if (enable) + val = REG_SET_FLD(VPU_40XX_CPU_SS_CPR_NOC_QREQN, TOP_MMIO, val); + else + val = REG_CLR_FLD(VPU_40XX_CPU_SS_CPR_NOC_QREQN, TOP_MMIO, val); + REGV_WR32(VPU_40XX_CPU_SS_CPR_NOC_QREQN, val); +} +
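+/* + * 40xx SOC CPU boot: release the CPU NOC interface via the QREQN + * handshake, then program the firmware entry point and set the DONE bit + * so the CPU starts executing. + */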
+static int soc_cpu_drive_40xx(struct ivpu_device *vdev, bool enable) +{ + int ret; + + cpu_noc_top_mmio_drive_40xx(vdev, enable); + + ret = cpu_noc_qacceptn_check_40xx(vdev, enable ? 0x1 : 0x0); + if (ret) { + ivpu_err(vdev, "Failed qacceptn check: %d\n", ret); + return ret; + } + + ret = cpu_noc_qdeny_check_40xx(vdev, 0x0); + if (ret) + ivpu_err(vdev, "Failed qdeny check: %d\n", ret); + + return ret; +} + +static int soc_cpu_enable(struct ivpu_device *vdev) +{ + return soc_cpu_drive_40xx(vdev, true); +} + +static int soc_cpu_boot_40xx(struct ivpu_device *vdev) +{ + int ret; + u32 val; + u64 val64; + + ret = soc_cpu_enable(vdev); + if (ret) { + ivpu_err(vdev, "Failed to enable SOC CPU: %d\n", ret); + return ret; + } + + val64 = vdev->fw->entry_point; + val64 <<= ffs(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO_IMAGE_LOCATION_MASK) - 1; + REGV_WR64(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, val64); + + val = REGV_RD32(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO); + val = REG_SET_FLD(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, DONE, val); + REGV_WR32(VPU_40XX_HOST_SS_VERIFICATION_ADDRESS_LO, val); + + ivpu_dbg(vdev, PM, "Booting firmware, mode: %s\n", + ivpu_fw_is_cold_boot(vdev) ? "cold boot" : "resume"); + + return 0; +} + +int ivpu_hw_ip_soc_cpu_boot(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return soc_cpu_boot_37xx(vdev); + else + return soc_cpu_boot_40xx(vdev); +} + +static void wdt_disable_37xx(struct ivpu_device *vdev) +{ + u32 val; + + /* Enable writing and set non-zero WDT value */ + REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); + REGV_WR32(VPU_37XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); + + /* Enable writing and disable watchdog timer */ + REGV_WR32(VPU_37XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); + REGV_WR32(VPU_37XX_CPU_SS_TIM_WDOG_EN, 0); + + /* Now clear the timeout interrupt */ + val = REGV_RD32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG); + val = REG_CLR_FLD(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); + REGV_WR32(VPU_37XX_CPU_SS_TIM_GEN_CONFIG, val); +} + +static void wdt_disable_40xx(struct ivpu_device *vdev) +{ + u32 val; + + REGV_WR32(VPU_40XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); + REGV_WR32(VPU_40XX_CPU_SS_TIM_WATCHDOG, TIM_WATCHDOG_RESET_VALUE); + + REGV_WR32(VPU_40XX_CPU_SS_TIM_SAFE, TIM_SAFE_ENABLE); + REGV_WR32(VPU_40XX_CPU_SS_TIM_WDOG_EN, 0); + + val = REGV_RD32(VPU_40XX_CPU_SS_TIM_GEN_CONFIG); + val = REG_CLR_FLD(VPU_40XX_CPU_SS_TIM_GEN_CONFIG, WDOG_TO_INT_CLR, val); + REGV_WR32(VPU_40XX_CPU_SS_TIM_GEN_CONFIG, val); +} + +void ivpu_hw_ip_wdt_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + wdt_disable_37xx(vdev); + else + wdt_disable_40xx(vdev); +} + +static u32 ipc_rx_count_get_37xx(struct ivpu_device *vdev) +{ + u32 count = REGV_RD32_SILENT(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT); + + return REG_GET_FLD(VPU_37XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count); +} + +static u32 ipc_rx_count_get_40xx(struct ivpu_device *vdev) +{ + u32 count = REGV_RD32_SILENT(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT); + + return REG_GET_FLD(VPU_40XX_HOST_SS_TIM_IPC_FIFO_STAT, FILL_LEVEL, count); +} + +u32 ivpu_hw_ip_ipc_rx_count_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return ipc_rx_count_get_37xx(vdev); + else + return ipc_rx_count_get_40xx(vdev); +} + +void ivpu_hw_ip_irq_enable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + REGV_WR32(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, ITF_FIREWALL_VIOLATION_MASK_37XX); + REGV_WR64(VPU_37XX_HOST_SS_ICB_ENABLE_0, ICB_0_1_IRQ_MASK_37XX); + } else { + REGV_WR32(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, ITF_FIREWALL_VIOLATION_MASK_40XX); + REGV_WR64(VPU_40XX_HOST_SS_ICB_ENABLE_0,
ICB_0_1_IRQ_MASK_40XX); + } +} + +void ivpu_hw_ip_irq_disable(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) { + REGV_WR64(VPU_37XX_HOST_SS_ICB_ENABLE_0, 0x0ull); + REGV_WR32(VPU_37XX_HOST_SS_FW_SOC_IRQ_EN, 0x0); + } else { + REGV_WR64(VPU_40XX_HOST_SS_ICB_ENABLE_0, 0x0ull); + REGV_WR32(VPU_40XX_HOST_SS_FW_SOC_IRQ_EN, 0x0ul); + } +} + +static void diagnose_failure_37xx(struct ivpu_device *vdev) +{ + u32 reg = REGV_RD32(VPU_37XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK_37XX; + + if (ipc_rx_count_get_37xx(vdev)) + ivpu_err(vdev, "IPC FIFO queue not empty, missed IPC IRQ"); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, reg)) + ivpu_err(vdev, "WDT MSS timeout detected\n"); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, reg)) + ivpu_err(vdev, "WDT NCE timeout detected\n"); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, reg)) + ivpu_err(vdev, "NOC Firewall irq detected\n"); +} + +static void diagnose_failure_40xx(struct ivpu_device *vdev) +{ + u32 reg = REGV_RD32(VPU_40XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK_40XX; + + if (ipc_rx_count_get_40xx(vdev)) + ivpu_err(vdev, "IPC FIFO queue not empty, missed IPC IRQ"); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, reg)) + ivpu_err(vdev, "WDT MSS timeout detected\n"); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, reg)) + ivpu_err(vdev, "WDT NCE timeout detected\n"); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, reg)) + ivpu_err(vdev, "NOC Firewall irq detected\n"); +} + +void ivpu_hw_ip_diagnose_failure(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + diagnose_failure_37xx(vdev); + else + diagnose_failure_40xx(vdev); +} + +void ivpu_hw_ip_irq_clear(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + REGV_WR64(VPU_37XX_HOST_SS_ICB_CLEAR_0, ICB_0_1_IRQ_MASK_37XX); + else + REGV_WR64(VPU_40XX_HOST_SS_ICB_CLEAR_0, ICB_0_1_IRQ_MASK_40XX); +} + +static void irq_wdt_nce_handler(struct ivpu_device *vdev) +{ + ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ"); +} + +static void irq_wdt_mss_handler(struct ivpu_device *vdev) +{ + ivpu_hw_ip_wdt_disable(vdev); + ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ"); +} + +static void irq_noc_firewall_handler(struct ivpu_device *vdev) +{ + ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ"); +} + +/* Handler for IRQs from NPU core */ +bool ivpu_hw_ip_irq_handler_37xx(struct ivpu_device *vdev, int irq, bool *wake_thread) +{ + u32 status = REGV_RD32(VPU_37XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK_37XX; + + if (!status) + return false; + + REGV_WR32(VPU_37XX_HOST_SS_ICB_CLEAR_0, status); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status)) + ivpu_mmu_irq_evtq_handler(vdev); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT, status)) + ivpu_ipc_irq_handler(vdev, wake_thread); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT, status)) + ivpu_dbg(vdev, IRQ, "MMU sync complete\n"); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT, status)) + ivpu_mmu_irq_gerr_handler(vdev); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status)) + irq_wdt_mss_handler(vdev); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, status)) + irq_wdt_nce_handler(vdev); + + if (REG_TEST_FLD(VPU_37XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, status)) + irq_noc_firewall_handler(vdev); + + 
return true; +} + +/* Handler for IRQs from NPU core */ +bool ivpu_hw_ip_irq_handler_40xx(struct ivpu_device *vdev, int irq, bool *wake_thread) +{ + u32 status = REGV_RD32(VPU_40XX_HOST_SS_ICB_STATUS_0) & ICB_0_IRQ_MASK_40XX; + + if (!status) + return false; + + REGV_WR32(VPU_40XX_HOST_SS_ICB_CLEAR_0, status); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_0_INT, status)) + ivpu_mmu_irq_evtq_handler(vdev); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, HOST_IPC_FIFO_INT, status)) + ivpu_ipc_irq_handler(vdev, wake_thread); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_1_INT, status)) + ivpu_dbg(vdev, IRQ, "MMU sync complete\n"); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, MMU_IRQ_2_INT, status)) + ivpu_mmu_irq_gerr_handler(vdev); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_0_INT, status)) + irq_wdt_mss_handler(vdev); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, CPU_INT_REDIRECT_1_INT, status)) + irq_wdt_nce_handler(vdev); + + if (REG_TEST_FLD(VPU_40XX_HOST_SS_ICB_STATUS_0, NOC_FIREWALL_INT, status)) + irq_noc_firewall_handler(vdev); + + return true; +} + +static void db_set_37xx(struct ivpu_device *vdev, u32 db_id) +{ + u32 reg_stride = VPU_37XX_CPU_SS_DOORBELL_1 - VPU_37XX_CPU_SS_DOORBELL_0; + u32 val = REG_FLD(VPU_37XX_CPU_SS_DOORBELL_0, SET); + + REGV_WR32I(VPU_37XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val); +} + +static void db_set_40xx(struct ivpu_device *vdev, u32 db_id) +{ + u32 reg_stride = VPU_40XX_CPU_SS_DOORBELL_1 - VPU_40XX_CPU_SS_DOORBELL_0; + u32 val = REG_FLD(VPU_40XX_CPU_SS_DOORBELL_0, SET); + + REGV_WR32I(VPU_40XX_CPU_SS_DOORBELL_0, reg_stride, db_id, val); +} + +void ivpu_hw_ip_db_set(struct ivpu_device *vdev, u32 db_id) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + db_set_37xx(vdev, db_id); + else + db_set_40xx(vdev, db_id); +} + +u32 ivpu_hw_ip_ipc_rx_addr_get(struct ivpu_device *vdev) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + return REGV_RD32(VPU_37XX_HOST_SS_TIM_IPC_FIFO_ATM); + else + return REGV_RD32(VPU_40XX_HOST_SS_TIM_IPC_FIFO_ATM); +} + +void ivpu_hw_ip_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr) +{ + if (ivpu_hw_ip_gen(vdev) == IVPU_HW_IP_37XX) + REGV_WR32(VPU_37XX_CPU_SS_TIM_IPC_FIFO, vpu_addr); + else + REGV_WR32(VPU_40XX_CPU_SS_TIM_IPC_FIFO, vpu_addr); +} diff --git a/drivers/accel/ivpu/ivpu_hw_ip.h b/drivers/accel/ivpu/ivpu_hw_ip.h new file mode 100644 index 000000000000..5e44b3de87aa --- /dev/null +++ b/drivers/accel/ivpu/ivpu_hw_ip.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020-2024 Intel Corporation + */ + +#ifndef __IVPU_HW_IP_H__ +#define __IVPU_HW_IP_H__ + +#include "ivpu_drv.h" + +int ivpu_hw_ip_host_ss_configure(struct ivpu_device *vdev); +void ivpu_hw_ip_idle_gen_enable(struct ivpu_device *vdev); +void ivpu_hw_ip_idle_gen_disable(struct ivpu_device *vdev); +int ivpu_hw_ip_pwr_domain_enable(struct ivpu_device *vdev); +int ivpu_hw_ip_host_ss_axi_enable(struct ivpu_device *vdev); +int ivpu_hw_ip_top_noc_enable(struct ivpu_device *vdev); +u64 ivpu_hw_ip_read_perf_timer_counter(struct ivpu_device *vdev); +void ivpu_hw_ip_snoop_disable(struct ivpu_device *vdev); +void ivpu_hw_ip_tbu_mmu_enable(struct ivpu_device *vdev); +int ivpu_hw_ip_soc_cpu_boot(struct ivpu_device *vdev); +void ivpu_hw_ip_wdt_disable(struct ivpu_device *vdev); +void ivpu_hw_ip_diagnose_failure(struct ivpu_device *vdev); +u32 ivpu_hw_ip_ipc_rx_count_get(struct ivpu_device *vdev); +void ivpu_hw_ip_irq_clear(struct ivpu_device *vdev); +bool 
ivpu_hw_ip_irq_handler_37xx(struct ivpu_device *vdev, int irq, bool *wake_thread); +bool ivpu_hw_ip_irq_handler_40xx(struct ivpu_device *vdev, int irq, bool *wake_thread); +void ivpu_hw_ip_db_set(struct ivpu_device *vdev, u32 db_id); +u32 ivpu_hw_ip_ipc_rx_addr_get(struct ivpu_device *vdev); +void ivpu_hw_ip_ipc_tx_set(struct ivpu_device *vdev, u32 vpu_addr); +void ivpu_hw_ip_irq_enable(struct ivpu_device *vdev); +void ivpu_hw_ip_irq_disable(struct ivpu_device *vdev); +void ivpu_hw_ip_fabric_req_override_enable_50xx(struct ivpu_device *vdev); +void ivpu_hw_ip_fabric_req_override_disable_50xx(struct ivpu_device *vdev); + +#endif /* __IVPU_HW_IP_H__ */ diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c index 56ff067f63e2..82fe617e8146 100644 --- a/drivers/accel/ivpu/ivpu_ipc.c +++ b/drivers/accel/ivpu/ivpu_ipc.c @@ -129,7 +129,7 @@ static void ivpu_ipc_tx_release(struct ivpu_device *vdev, u32 vpu_addr) static void ivpu_ipc_tx(struct ivpu_device *vdev, u32 vpu_addr) { - ivpu_hw_reg_ipc_tx_set(vdev, vpu_addr); + ivpu_hw_ipc_tx_set(vdev, vpu_addr); } static void @@ -392,8 +392,8 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev, bool *wake_thread) * Driver needs to purge all messages from IPC FIFO to clear IPC interrupt. * Without purge IPC FIFO to 0 next IPC interrupts won't be generated. */ - while (ivpu_hw_reg_ipc_rx_count_get(vdev)) { - vpu_addr = ivpu_hw_reg_ipc_rx_addr_get(vdev); + while (ivpu_hw_ipc_rx_count_get(vdev)) { + vpu_addr = ivpu_hw_ipc_rx_addr_get(vdev); if (vpu_addr == REG_IO_ERROR) { ivpu_err_ratelimited(vdev, "Failed to read IPC rx addr register\n"); return; diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c index 845181b48b3a..e4e24813fe03 100644 --- a/drivers/accel/ivpu/ivpu_job.c +++ b/drivers/accel/ivpu/ivpu_job.c @@ -27,7 +27,7 @@ static void ivpu_cmdq_ring_db(struct ivpu_device *vdev, struct ivpu_cmdq *cmdq) { - ivpu_hw_reg_db_set(vdev, cmdq->db_id); + ivpu_hw_db_set(vdev, cmdq->db_id); } static int ivpu_preemption_buffers_create(struct ivpu_device *vdev,