From patchwork Fri Jun 14 14:21:47 2024
X-Patchwork-Submitter: Zong Li
X-Patchwork-Id: 13698742
From: Zong Li
To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Cc: Zong Li
Subject: [RFC PATCH v2 01/10] iommu/riscv: add RISC-V IOMMU PMU support
Date: Fri, 14 Jun 2024 22:21:47 +0800
Message-Id: <20240614142156.29420-2-zong.li@sifive.com>
In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com>
References: <20240614142156.29420-1-zong.li@sifive.com>

This patch implements the RISC-V IOMMU hardware performance monitor; it supports both the counting and the sampling mode. The specification does not define an event ID for counting clock cycles, so there is no iohpmevt0 register associated with the cycle counter (iohpmcycles). Since perf still needs a cycle-counting event, reserve the maximum event ID for it for now.
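As an illustration only (not part of the submitted patch): because the driver writes event->attr.config straight into an IOHPMEVT register, the perf "config" layout matches the format attributes declared below (event=config:0-14, did_gscid=config:36-59, filter_did_gscid=config:61, and so on), and with a 15-bit event field the reserved cycle event RISCV_IOMMU_HPMEVENT_CYCLE = GENMASK_ULL(14, 0) is simply the largest encodable ID. A minimal sketch of building such a config value, assuming the RISCV_IOMMU_IOHPMEVT_* masks from iommu-bits.h; the helper name is hypothetical:

#include <linux/types.h>
#include <linux/bitfield.h>
#include "iommu-bits.h"

/* Count TLB misses caused by a single device ID: select the TLB_MISS
 * event, store the device ID in the DID_GSCID field and enable
 * device-ID matching via DV_GSCV. */
static u64 riscv_iommu_pmu_example_config(u32 devid)
{
	return FIELD_PREP(RISCV_IOMMU_IOHPMEVT_EVENTID,
			  RISCV_IOMMU_HPMEVENT_TLB_MISS) |
	       FIELD_PREP(RISCV_IOMMU_IOHPMEVT_DID_GSCID, devid) |
	       FIELD_PREP(RISCV_IOMMU_IOHPMEVT_DV_GSCV, 1);
}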
Signed-off-by: Zong Li --- drivers/iommu/riscv/Makefile | 2 +- drivers/iommu/riscv/iommu-bits.h | 16 ++ drivers/iommu/riscv/iommu-pmu.c | 479 +++++++++++++++++++++++++++++++ drivers/iommu/riscv/iommu.h | 8 + 4 files changed, 504 insertions(+), 1 deletion(-) create mode 100644 drivers/iommu/riscv/iommu-pmu.c diff --git a/drivers/iommu/riscv/Makefile b/drivers/iommu/riscv/Makefile index f54c9ed17d41..d36625a1fd08 100644 --- a/drivers/iommu/riscv/Makefile +++ b/drivers/iommu/riscv/Makefile @@ -1,3 +1,3 @@ # SPDX-License-Identifier: GPL-2.0-only -obj-$(CONFIG_RISCV_IOMMU) += iommu.o iommu-platform.o +obj-$(CONFIG_RISCV_IOMMU) += iommu.o iommu-platform.o iommu-pmu.o obj-$(CONFIG_RISCV_IOMMU_PCI) += iommu-pci.o diff --git a/drivers/iommu/riscv/iommu-bits.h b/drivers/iommu/riscv/iommu-bits.h index 98daf0e1a306..60523449f016 100644 --- a/drivers/iommu/riscv/iommu-bits.h +++ b/drivers/iommu/riscv/iommu-bits.h @@ -17,6 +17,7 @@ #include #include #include +#include /* * Chapter 5: Memory Mapped register interface @@ -207,6 +208,7 @@ enum riscv_iommu_ddtp_modes { /* 5.22 Performance monitoring event counters (31 * 64bits) */ #define RISCV_IOMMU_REG_IOHPMCTR_BASE 0x0068 #define RISCV_IOMMU_REG_IOHPMCTR(_n) (RISCV_IOMMU_REG_IOHPMCTR_BASE + ((_n) * 0x8)) +#define RISCV_IOMMU_IOHPMCTR_COUNTER GENMASK_ULL(63, 0) /* 5.23 Performance monitoring event selectors (31 * 64bits) */ #define RISCV_IOMMU_REG_IOHPMEVT_BASE 0x0160 @@ -250,6 +252,20 @@ enum riscv_iommu_hpmevent_id { RISCV_IOMMU_HPMEVENT_MAX = 9 }; +/* Use maximum event ID for cycle event */ +#define RISCV_IOMMU_HPMEVENT_CYCLE GENMASK_ULL(14, 0) + +#define RISCV_IOMMU_HPM_COUNTER_NUM 32 + +struct riscv_iommu_pmu { + struct pmu pmu; + void __iomem *reg; + int num_counters; + u64 mask_counter; + struct perf_event *events[RISCV_IOMMU_IOHPMEVT_CNT + 1]; + DECLARE_BITMAP(used_counters, RISCV_IOMMU_IOHPMEVT_CNT + 1); +}; + /* 5.24 Translation request IOVA (64bits) */ #define RISCV_IOMMU_REG_TR_REQ_IOVA 0x0258 #define RISCV_IOMMU_TR_REQ_IOVA_VPN GENMASK_ULL(63, 12) diff --git a/drivers/iommu/riscv/iommu-pmu.c b/drivers/iommu/riscv/iommu-pmu.c new file mode 100644 index 000000000000..5fc45aaf4ca3 --- /dev/null +++ b/drivers/iommu/riscv/iommu-pmu.c @@ -0,0 +1,479 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (C) 2024 SiFive + * + * Authors + * Zong Li + */ + +#include + +#include "iommu.h" +#include "iommu-bits.h" + +#define to_riscv_iommu_pmu(p) (container_of(p, struct riscv_iommu_pmu, pmu)) + +#define RISCV_IOMMU_PMU_ATTR_EXTRACTOR(_name, _mask) \ + static inline u32 get_##_name(struct perf_event *event) \ + { \ + return FIELD_GET(_mask, event->attr.config); \ + } \ + +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(event, RISCV_IOMMU_IOHPMEVT_EVENTID); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(partial_matching, RISCV_IOMMU_IOHPMEVT_DMASK); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(pid_pscid, RISCV_IOMMU_IOHPMEVT_PID_PSCID); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(did_gscid, RISCV_IOMMU_IOHPMEVT_DID_GSCID); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_pid_pscid, RISCV_IOMMU_IOHPMEVT_PV_PSCV); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_did_gscid, RISCV_IOMMU_IOHPMEVT_DV_GSCV); +RISCV_IOMMU_PMU_ATTR_EXTRACTOR(filter_id_type, RISCV_IOMMU_IOHPMEVT_IDT); + +/* Formats */ +PMU_FORMAT_ATTR(event, "config:0-14"); +PMU_FORMAT_ATTR(partial_matching, "config:15"); +PMU_FORMAT_ATTR(pid_pscid, "config:16-35"); +PMU_FORMAT_ATTR(did_gscid, "config:36-59"); +PMU_FORMAT_ATTR(filter_pid_pscid, "config:60"); +PMU_FORMAT_ATTR(filter_did_gscid, "config:61"); +PMU_FORMAT_ATTR(filter_id_type, "config:62"); + +static 
struct attribute *riscv_iommu_pmu_formats[] = { + &format_attr_event.attr, + &format_attr_partial_matching.attr, + &format_attr_pid_pscid.attr, + &format_attr_did_gscid.attr, + &format_attr_filter_pid_pscid.attr, + &format_attr_filter_did_gscid.attr, + &format_attr_filter_id_type.attr, + NULL, +}; + +static const struct attribute_group riscv_iommu_pmu_format_group = { + .name = "format", + .attrs = riscv_iommu_pmu_formats, +}; + +/* Events */ +static ssize_t riscv_iommu_pmu_event_show(struct device *dev, + struct device_attribute *attr, + char *page) +{ + struct perf_pmu_events_attr *pmu_attr; + + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); + + return sprintf(page, "event=0x%02llx\n", pmu_attr->id); +} + +PMU_EVENT_ATTR(cycle, event_attr_cycle, + RISCV_IOMMU_HPMEVENT_CYCLE, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(dont_count, event_attr_dont_count, + RISCV_IOMMU_HPMEVENT_INVALID, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(untranslated_req, event_attr_untranslated_req, + RISCV_IOMMU_HPMEVENT_URQ, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(translated_req, event_attr_translated_req, + RISCV_IOMMU_HPMEVENT_TRQ, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(ats_trans_req, event_attr_ats_trans_req, + RISCV_IOMMU_HPMEVENT_ATS_RQ, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(tlb_miss, event_attr_tlb_miss, + RISCV_IOMMU_HPMEVENT_TLB_MISS, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(ddt_walks, event_attr_ddt_walks, + RISCV_IOMMU_HPMEVENT_DD_WALK, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(pdt_walks, event_attr_pdt_walks, + RISCV_IOMMU_HPMEVENT_PD_WALK, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(s_vs_pt_walks, event_attr_s_vs_pt_walks, + RISCV_IOMMU_HPMEVENT_S_VS_WALKS, riscv_iommu_pmu_event_show); +PMU_EVENT_ATTR(g_pt_walks, event_attr_g_pt_walks, + RISCV_IOMMU_HPMEVENT_G_WALKS, riscv_iommu_pmu_event_show); + +static struct attribute *riscv_iommu_pmu_events[] = { + &event_attr_cycle.attr.attr, + &event_attr_dont_count.attr.attr, + &event_attr_untranslated_req.attr.attr, + &event_attr_translated_req.attr.attr, + &event_attr_ats_trans_req.attr.attr, + &event_attr_tlb_miss.attr.attr, + &event_attr_ddt_walks.attr.attr, + &event_attr_pdt_walks.attr.attr, + &event_attr_s_vs_pt_walks.attr.attr, + &event_attr_g_pt_walks.attr.attr, + NULL, +}; + +static const struct attribute_group riscv_iommu_pmu_events_group = { + .name = "events", + .attrs = riscv_iommu_pmu_events, +}; + +static const struct attribute_group *riscv_iommu_pmu_attr_grps[] = { + &riscv_iommu_pmu_format_group, + &riscv_iommu_pmu_events_group, + NULL, +}; + +/* PMU Operations */ +static void riscv_iommu_pmu_set_counter(struct riscv_iommu_pmu *pmu, u32 idx, + u64 value) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES; + + if (WARN_ON_ONCE(idx < 0 || idx > pmu->num_counters)) + return; + + writeq(FIELD_PREP(RISCV_IOMMU_IOHPMCTR_COUNTER, value), addr + idx * 8); +} + +static u64 riscv_iommu_pmu_get_counter(struct riscv_iommu_pmu *pmu, u32 idx) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES; + u64 value; + + if (WARN_ON_ONCE(idx < 0 || idx > pmu->num_counters)) + return -EINVAL; + + value = readq(addr + idx * 8); + + return FIELD_GET(RISCV_IOMMU_IOHPMCTR_COUNTER, value); +} + +static u64 riscv_iommu_pmu_get_event(struct riscv_iommu_pmu *pmu, u32 idx) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMEVT_BASE; + + if (WARN_ON_ONCE(idx < 0 || idx > pmu->num_counters)) + return 0; + + /* There is no associtated IOHPMEVT0 for IOHPMCYCLES */ + if (idx == 0) + return 0; + + 
return readq(addr + (idx - 1) * 8); +} + +static void riscv_iommu_pmu_set_event(struct riscv_iommu_pmu *pmu, u32 idx, + u64 value) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOHPMEVT_BASE; + + if (WARN_ON_ONCE(idx < 0 || idx > pmu->num_counters)) + return; + + /* There is no associtated IOHPMEVT0 for IOHPMCYCLES */ + if (idx == 0) + return; + + writeq(value, addr + (idx - 1) * 8); +} + +static void riscv_iommu_pmu_enable_counter(struct riscv_iommu_pmu *pmu, u32 idx) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH; + u32 value = readl(addr); + + writel(value & ~BIT(idx), addr); +} + +static void riscv_iommu_pmu_disable_counter(struct riscv_iommu_pmu *pmu, u32 idx) +{ + void __iomem *addr = pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH; + u32 value = readl(addr); + + writel(value | BIT(idx), addr); +} + +static void riscv_iommu_pmu_enable_ovf_intr(struct riscv_iommu_pmu *pmu, u32 idx) +{ + u64 value; + + if (get_event(pmu->events[idx]) == RISCV_IOMMU_HPMEVENT_CYCLE) { + value = riscv_iommu_pmu_get_counter(pmu, idx) & ~RISCV_IOMMU_IOHPMCYCLES_OF; + writeq(value, pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES); + } else { + value = riscv_iommu_pmu_get_event(pmu, idx) & ~RISCV_IOMMU_IOHPMEVT_OF; + writeq(value, pmu->reg + RISCV_IOMMU_REG_IOHPMEVT_BASE + (idx - 1) * 8); + } +} + +static void riscv_iommu_pmu_disable_ovf_intr(struct riscv_iommu_pmu *pmu, u32 idx) +{ + u64 value; + + if (get_event(pmu->events[idx]) == RISCV_IOMMU_HPMEVENT_CYCLE) { + value = riscv_iommu_pmu_get_counter(pmu, idx) | RISCV_IOMMU_IOHPMCYCLES_OF; + writeq(value, pmu->reg + RISCV_IOMMU_REG_IOHPMCYCLES); + } else { + value = riscv_iommu_pmu_get_event(pmu, idx) | RISCV_IOMMU_IOHPMEVT_OF; + writeq(value, pmu->reg + RISCV_IOMMU_REG_IOHPMEVT_BASE + (idx - 1) * 8); + } +} + +static void riscv_iommu_pmu_start_all(struct riscv_iommu_pmu *pmu) +{ + int idx; + + for_each_set_bit(idx, pmu->used_counters, pmu->num_counters) { + riscv_iommu_pmu_enable_ovf_intr(pmu, idx); + riscv_iommu_pmu_enable_counter(pmu, idx); + } +} + +static void riscv_iommu_pmu_stop_all(struct riscv_iommu_pmu *pmu) +{ + writel(GENMASK_ULL(pmu->num_counters - 1, 0), + pmu->reg + RISCV_IOMMU_REG_IOCOUNTINH); +} + +/* PMU APIs */ +static int riscv_iommu_pmu_set_period(struct perf_event *event) +{ + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + s64 left = local64_read(&hwc->period_left); + s64 period = hwc->sample_period; + u64 max_period = pmu->mask_counter; + int ret = 0; + + if (unlikely(left <= -period)) { + left = period; + local64_set(&hwc->period_left, left); + hwc->last_period = period; + ret = 1; + } + + if (unlikely(left <= 0)) { + left += period; + local64_set(&hwc->period_left, left); + hwc->last_period = period; + ret = 1; + } + + /* + * Limit the maximum period to prevent the counter value + * from overtaking the one we are about to program. In + * effect we are reducing max_period to account for + * interrupt latency (and we are being very conservative). 
+ */ + if (left > (max_period >> 1)) + left = (max_period >> 1); + + local64_set(&hwc->prev_count, (u64)-left); + riscv_iommu_pmu_set_counter(pmu, hwc->idx, (u64)(-left) & max_period); + perf_event_update_userpage(event); + + return ret; +} + +static int riscv_iommu_pmu_event_init(struct perf_event *event) +{ + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + + hwc->idx = -1; + hwc->config = event->attr.config; + + if (!is_sampling_event(event)) { + /* + * For non-sampling runs, limit the sample_period to half + * of the counter width. That way, the new counter value + * is far less likely to overtake the previous one unless + * you have some serious IRQ latency issues. + */ + hwc->sample_period = pmu->mask_counter >> 1; + hwc->last_period = hwc->sample_period; + local64_set(&hwc->period_left, hwc->sample_period); + } + + return 0; +} + +static void riscv_iommu_pmu_update(struct perf_event *event) +{ + struct hw_perf_event *hwc = &event->hw; + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + u64 delta, prev, now; + u32 idx = hwc->idx; + + do { + prev = local64_read(&hwc->prev_count); + now = riscv_iommu_pmu_get_counter(pmu, idx); + } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev); + + delta = FIELD_GET(RISCV_IOMMU_IOHPMCTR_COUNTER, now - prev) & pmu->mask_counter; + local64_add(delta, &event->count); + local64_sub(delta, &hwc->period_left); +} + +static void riscv_iommu_pmu_start(struct perf_event *event, int flags) +{ + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + + if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED))) + return; + + if (flags & PERF_EF_RELOAD) + WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE)); + + hwc->state = 0; + riscv_iommu_pmu_set_period(event); + riscv_iommu_pmu_set_event(pmu, hwc->idx, hwc->config); + riscv_iommu_pmu_enable_ovf_intr(pmu, hwc->idx); + riscv_iommu_pmu_enable_counter(pmu, hwc->idx); + + perf_event_update_userpage(event); +} + +static void riscv_iommu_pmu_stop(struct perf_event *event, int flags) +{ + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + struct hw_perf_event *hwc = &event->hw; + + if (hwc->state & PERF_HES_STOPPED) + return; + + riscv_iommu_pmu_set_event(pmu, hwc->idx, RISCV_IOMMU_HPMEVENT_INVALID); + riscv_iommu_pmu_disable_counter(pmu, hwc->idx); + + if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) + riscv_iommu_pmu_update(event); + + hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; +} + +static int riscv_iommu_pmu_add(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + unsigned int num_counters = pmu->num_counters; + int idx; + + /* Reserve index zero for iohpmcycles */ + if (get_event(event) == RISCV_IOMMU_HPMEVENT_CYCLE) + idx = 0; + else + idx = find_next_zero_bit(pmu->used_counters, num_counters, 1); + + if (idx == num_counters) + return -EAGAIN; + + set_bit(idx, pmu->used_counters); + + pmu->events[idx] = event; + hwc->idx = idx; + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; + + if (flags & PERF_EF_START) + riscv_iommu_pmu_start(event, flags); + + /* Propagate changes to the userspace mapping. 
*/ + perf_event_update_userpage(event); + + return 0; +} + +static void riscv_iommu_pmu_read(struct perf_event *event) +{ + riscv_iommu_pmu_update(event); +} + +static void riscv_iommu_pmu_del(struct perf_event *event, int flags) +{ + struct hw_perf_event *hwc = &event->hw; + struct riscv_iommu_pmu *pmu = to_riscv_iommu_pmu(event->pmu); + int idx = hwc->idx; + + riscv_iommu_pmu_stop(event, PERF_EF_UPDATE); + pmu->events[idx] = NULL; + clear_bit(idx, pmu->used_counters); + perf_event_update_userpage(event); +} + +irqreturn_t riscv_iommu_pmu_handle_irq(struct riscv_iommu_pmu *pmu) +{ + struct perf_sample_data data; + struct pt_regs *regs; + u32 ovf = readl(pmu->reg + RISCV_IOMMU_REG_IOCOUNTOVF); + int idx; + + if (!ovf) + return IRQ_NONE; + + riscv_iommu_pmu_stop_all(pmu); + + regs = get_irq_regs(); + + for_each_set_bit(idx, (unsigned long *)&ovf, pmu->num_counters) { + struct perf_event *event = pmu->events[idx]; + struct hw_perf_event *hwc; + + if (WARN_ON_ONCE(!event) || !is_sampling_event(event)) + continue; + + hwc = &event->hw; + + riscv_iommu_pmu_update(event); + perf_sample_data_init(&data, 0, hwc->last_period); + if (!riscv_iommu_pmu_set_period(event)) + continue; + + if (perf_event_overflow(event, &data, regs)) + riscv_iommu_pmu_stop(event, 0); + } + + riscv_iommu_pmu_start_all(pmu); + + return IRQ_HANDLED; +} + +int riscv_iommu_pmu_init(struct riscv_iommu_pmu *pmu, void __iomem *reg, + const char *dev_name) +{ + char *name; + int ret; + + pmu->reg = reg; + pmu->num_counters = RISCV_IOMMU_HPM_COUNTER_NUM; + pmu->mask_counter = RISCV_IOMMU_IOHPMCTR_COUNTER; + + pmu->pmu = (struct pmu) { + .task_ctx_nr = perf_invalid_context, + .event_init = riscv_iommu_pmu_event_init, + .add = riscv_iommu_pmu_add, + .del = riscv_iommu_pmu_del, + .start = riscv_iommu_pmu_start, + .stop = riscv_iommu_pmu_stop, + .read = riscv_iommu_pmu_read, + .attr_groups = riscv_iommu_pmu_attr_grps, + .capabilities = PERF_PMU_CAP_NO_EXCLUDE, + .module = THIS_MODULE, + }; + + name = kasprintf(GFP_KERNEL, "riscv_iommu_pmu_%s", dev_name); + + ret = perf_pmu_register(&pmu->pmu, name, -1); + if (ret) { + pr_err("Failed to register riscv_iommu_pmu_%s: %d\n", + dev_name, ret); + return ret; + } + + /* Stop all counters and later start the counter with perf */ + riscv_iommu_pmu_stop_all(pmu); + + pr_info("riscv_iommu_pmu_%s: Registered with %d counters\n", + dev_name, pmu->num_counters); + + return 0; +} + +void riscv_iommu_pmu_uninit(struct riscv_iommu_pmu *pmu) +{ + int idx; + + /* Disable interrupt and functions */ + for_each_set_bit(idx, pmu->used_counters, pmu->num_counters) { + riscv_iommu_pmu_disable_counter(pmu, idx); + riscv_iommu_pmu_disable_ovf_intr(pmu, idx); + } + + perf_pmu_unregister(&pmu->pmu); +} diff --git a/drivers/iommu/riscv/iommu.h b/drivers/iommu/riscv/iommu.h index b1c4664542b4..92659a8a75ae 100644 --- a/drivers/iommu/riscv/iommu.h +++ b/drivers/iommu/riscv/iommu.h @@ -60,11 +60,19 @@ struct riscv_iommu_device { unsigned int ddt_mode; dma_addr_t ddt_phys; u64 *ddt_root; + + /* hardware performance monitor */ + struct riscv_iommu_pmu pmu; }; int riscv_iommu_init(struct riscv_iommu_device *iommu); void riscv_iommu_remove(struct riscv_iommu_device *iommu); +int riscv_iommu_pmu_init(struct riscv_iommu_pmu *pmu, void __iomem *reg, + const char *name); +void riscv_iommu_pmu_uninit(struct riscv_iommu_pmu *pmu); +irqreturn_t riscv_iommu_pmu_handle_irq(struct riscv_iommu_pmu *pmu); + #define riscv_iommu_readl(iommu, addr) \ readl_relaxed((iommu)->reg + (addr)) From patchwork Fri Jun 14 14:21:48 2024 
X-Patchwork-Submitter: Zong Li
X-Patchwork-Id: 13698743
From: Zong Li
To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Cc: Zong Li
Subject: [RFC PATCH v2 02/10] iommu/riscv: support HPM and interrupt handling
Date: Fri, 14 Jun 2024 22:21:48 +0800
Message-Id: <20240614142156.29420-3-zong.li@sifive.com>
In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com>
References: <20240614142156.29420-1-zong.li@sifive.com>

This patch initializes the PMU during IOMMU initialization and uninitializes it when the driver is removed. It also provides the interrupt handler. The handler must be a primary handler rather than a threaded one, because pt_regs is empty when the IRQ is threaded, and perf_event_overflow() needs pt_regs.

Signed-off-by: Zong Li
---
 drivers/iommu/riscv/iommu.c | 65 +++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
index 8b6a64c1ad8d..1716b2251f38 100644
--- a/drivers/iommu/riscv/iommu.c
+++ b/drivers/iommu/riscv/iommu.c
@@ -540,6 +540,62 @@ static irqreturn_t riscv_iommu_fltq_process(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+/*
+ * IOMMU Hardware performance monitor
+ */
+
+/* HPM interrupt primary handler */
+static irqreturn_t riscv_iommu_hpm_irq_handler(int irq, void *dev_id)
+{
+	struct riscv_iommu_device *iommu = (struct riscv_iommu_device *)dev_id;
+
+	/* Process pmu irq */
+	riscv_iommu_pmu_handle_irq(&iommu->pmu);
+
+	/* Clear performance monitoring interrupt pending */
+	riscv_iommu_writel(iommu, RISCV_IOMMU_REG_IPSR, RISCV_IOMMU_IPSR_PMIP);
+
+	return IRQ_HANDLED;
+}
+
+/* HPM initialization */
+static int riscv_iommu_hpm_enable(struct riscv_iommu_device *iommu)
+{
+	int rc;
+
+	if (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM))
+		return 0;
+
+	/*
+	 * pt_regs is empty when threading the IRQ, but pt_regs is needed
+	 * by perf_event_overflow(). Use a primary handler instead of a
+	 * thread function for the PM IRQ.
+	 *
+	 * Set the IRQF_ONESHOT flag because this IRQ might be shared with
+	 * other threaded IRQs by other queues.
+ */ + rc = devm_request_irq(iommu->dev, + iommu->irqs[riscv_iommu_queue_vec(iommu, RISCV_IOMMU_IPSR_PMIP)], + riscv_iommu_hpm_irq_handler, IRQF_ONESHOT | IRQF_SHARED, NULL, iommu); + if (rc) + return rc; + + return riscv_iommu_pmu_init(&iommu->pmu, iommu->reg, dev_name(iommu->dev)); +} + +/* HPM uninitialization */ +static void riscv_iommu_hpm_disable(struct riscv_iommu_device *iommu) +{ + if (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM)) + return; + + devm_free_irq(iommu->dev, + iommu->irqs[riscv_iommu_queue_vec(iommu, RISCV_IOMMU_IPSR_PMIP)], + iommu); + + riscv_iommu_pmu_uninit(&iommu->pmu); +} + /* Lookup and initialize device context info structure. */ static struct riscv_iommu_dc *riscv_iommu_get_dc(struct riscv_iommu_device *iommu, unsigned int devid) @@ -1612,6 +1668,9 @@ void riscv_iommu_remove(struct riscv_iommu_device *iommu) riscv_iommu_iodir_set_mode(iommu, RISCV_IOMMU_DDTP_IOMMU_MODE_OFF); riscv_iommu_queue_disable(&iommu->cmdq); riscv_iommu_queue_disable(&iommu->fltq); + + if (iommu->caps & RISCV_IOMMU_CAPABILITIES_HPM) + riscv_iommu_pmu_uninit(&iommu->pmu); } int riscv_iommu_init(struct riscv_iommu_device *iommu) @@ -1651,6 +1710,10 @@ int riscv_iommu_init(struct riscv_iommu_device *iommu) if (rc) goto err_queue_disable; + rc = riscv_iommu_hpm_enable(iommu); + if (rc) + goto err_hpm_disable; + rc = iommu_device_sysfs_add(&iommu->iommu, NULL, NULL, "riscv-iommu@%s", dev_name(iommu->dev)); if (rc) { @@ -1669,6 +1732,8 @@ int riscv_iommu_init(struct riscv_iommu_device *iommu) err_remove_sysfs: iommu_device_sysfs_remove(&iommu->iommu); err_iodir_off: + riscv_iommu_hpm_disable(iommu); +err_hpm_disable: riscv_iommu_iodir_set_mode(iommu, RISCV_IOMMU_DDTP_IOMMU_MODE_OFF); err_queue_disable: riscv_iommu_queue_disable(&iommu->fltq); From patchwork Fri Jun 14 14:21:49 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698744 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E5DE8C27C77 for ; Fri, 14 Jun 2024 14:22:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=j5Pd+VYwUHqtIGgWnciynplvt40otSnBWl0ioqYC2iw=; b=luxuD/+m16UuAF aGpNWdj33SbiQHVuD9vkUvg5QuBTgDJCiuIKAojs3vecUBdZH03DaxcnMBjKmqvF+/9+P8ncVnDvL 9kVNoQUD+sbZD3jGMjmZ16AoQPDhMtmaYuliofsJZDdK2aNuet+X4QYiBAPy2rSZGzSnCVcdICdpH VyXJMdHbPRJaj6rV/z0OldSGQo39ys5SnHOwHSw3BPnDUJ/cK1L6yJQhHuvWXEu9rI8MVI4fcBQXV IoZ4wEi/3HMPjWW+wJU+77Wf2nEqNkZCT4aopi9jVp0iHsQDiiBR10Y25oZLU6s/B0w2hBjTcPL3o gTzK9EpkJMEvFA5zfqZw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7ot-00000002zwr-1SOM; Fri, 14 Jun 2024 14:22:23 +0000 Received: from mail-pl1-x635.google.com ([2607:f8b0:4864:20::635]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 
From: Zong Li
To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Cc: Zong Li
Subject: [RFC PATCH v2 03/10] iommu/riscv: use data structure instead of individual values
Date: Fri, 14 Jun 2024 22:21:49 +0800
Message-Id: <20240614142156.29420-4-zong.li@sifive.com>
In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com>
References: <20240614142156.29420-1-zong.li@sifive.com>

The number of parameters will keep growing as more bit fields of the device context need to be set up.
Use a data structure to wrap them up. Signed-off-by: Zong Li --- drivers/iommu/riscv/iommu.c | 31 +++++++++++++++++++------------ 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c index 1716b2251f38..9aeb4b20c145 100644 --- a/drivers/iommu/riscv/iommu.c +++ b/drivers/iommu/riscv/iommu.c @@ -1045,7 +1045,7 @@ static void riscv_iommu_iotlb_inval(struct riscv_iommu_domain *domain, * interim translation faults. */ static void riscv_iommu_iodir_update(struct riscv_iommu_device *iommu, - struct device *dev, u64 fsc, u64 ta) + struct device *dev, struct riscv_iommu_dc *new_dc) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct riscv_iommu_dc *dc; @@ -1079,10 +1079,10 @@ static void riscv_iommu_iodir_update(struct riscv_iommu_device *iommu, for (i = 0; i < fwspec->num_ids; i++) { dc = riscv_iommu_get_dc(iommu, fwspec->ids[i]); tc = READ_ONCE(dc->tc); - tc |= ta & RISCV_IOMMU_DC_TC_V; + tc |= new_dc->ta & RISCV_IOMMU_DC_TC_V; - WRITE_ONCE(dc->fsc, fsc); - WRITE_ONCE(dc->ta, ta & RISCV_IOMMU_PC_TA_PSCID); + WRITE_ONCE(dc->fsc, new_dc->fsc); + WRITE_ONCE(dc->ta, new_dc->ta & RISCV_IOMMU_PC_TA_PSCID); /* Update device context, write TC.V as the last step. */ dma_wmb(); WRITE_ONCE(dc->tc, tc); @@ -1369,20 +1369,20 @@ static int riscv_iommu_attach_paging_domain(struct iommu_domain *iommu_domain, struct riscv_iommu_domain *domain = iommu_domain_to_riscv(iommu_domain); struct riscv_iommu_device *iommu = dev_to_iommu(dev); struct riscv_iommu_info *info = dev_iommu_priv_get(dev); - u64 fsc, ta; + struct riscv_iommu_dc dc = {0}; if (!riscv_iommu_pt_supported(iommu, domain->pgd_mode)) return -ENODEV; - fsc = FIELD_PREP(RISCV_IOMMU_PC_FSC_MODE, domain->pgd_mode) | - FIELD_PREP(RISCV_IOMMU_PC_FSC_PPN, virt_to_pfn(domain->pgd_root)); - ta = FIELD_PREP(RISCV_IOMMU_PC_TA_PSCID, domain->pscid) | - RISCV_IOMMU_PC_TA_V; + dc.fsc = FIELD_PREP(RISCV_IOMMU_PC_FSC_MODE, domain->pgd_mode) | + FIELD_PREP(RISCV_IOMMU_PC_FSC_PPN, virt_to_pfn(domain->pgd_root)); + dc.ta = FIELD_PREP(RISCV_IOMMU_PC_TA_PSCID, domain->pscid) | + RISCV_IOMMU_PC_TA_V; if (riscv_iommu_bond_link(domain, dev)) return -ENOMEM; - riscv_iommu_iodir_update(iommu, dev, fsc, ta); + riscv_iommu_iodir_update(iommu, dev, &dc); riscv_iommu_bond_unlink(info->domain, dev); info->domain = domain; @@ -1484,9 +1484,12 @@ static int riscv_iommu_attach_blocking_domain(struct iommu_domain *iommu_domain, { struct riscv_iommu_device *iommu = dev_to_iommu(dev); struct riscv_iommu_info *info = dev_iommu_priv_get(dev); + struct riscv_iommu_dc dc = {0}; + + dc.fsc = RISCV_IOMMU_FSC_BARE; /* Make device context invalid, translation requests will fault w/ #258 */ - riscv_iommu_iodir_update(iommu, dev, RISCV_IOMMU_FSC_BARE, 0); + riscv_iommu_iodir_update(iommu, dev, &dc); riscv_iommu_bond_unlink(info->domain, dev); info->domain = NULL; @@ -1505,8 +1508,12 @@ static int riscv_iommu_attach_identity_domain(struct iommu_domain *iommu_domain, { struct riscv_iommu_device *iommu = dev_to_iommu(dev); struct riscv_iommu_info *info = dev_iommu_priv_get(dev); + struct riscv_iommu_dc dc = {0}; + + dc.fsc = RISCV_IOMMU_FSC_BARE; + dc.ta = RISCV_IOMMU_PC_TA_V; - riscv_iommu_iodir_update(iommu, dev, RISCV_IOMMU_FSC_BARE, RISCV_IOMMU_PC_TA_V); + riscv_iommu_iodir_update(iommu, dev, &dc); riscv_iommu_bond_unlink(info->domain, dev); info->domain = NULL; From patchwork Fri Jun 14 14:21:50 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong 
Li X-Patchwork-Id: 13698745
From: Zong Li
To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Cc: Zong Li
Subject: [RFC PATCH v2 04/10] iommu/riscv: add iotlb_sync_map operation support
Date: Fri, 14 Jun 2024 22:21:50 +0800
Message-Id: <20240614142156.29420-5-zong.li@sifive.com>
In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com>
References: <20240614142156.29420-1-zong.li@sifive.com>

Add the iotlb_sync_map operation to flush the IOTLB. Software must flush the IOTLB after each page-table update.

Signed-off-by: Zong Li
---
 drivers/iommu/riscv/Makefile |  1 +
 drivers/iommu/riscv/iommu.c  | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/iommu/riscv/Makefile b/drivers/iommu/riscv/Makefile
index d36625a1fd08..f02ce6ebfbd0 100644
--- a/drivers/iommu/riscv/Makefile
+++ b/drivers/iommu/riscv/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_RISCV_IOMMU) += iommu.o iommu-platform.o iommu-pmu.o
 obj-$(CONFIG_RISCV_IOMMU_PCI) += iommu-pci.o
+obj-$(CONFIG_SIFIVE_IOMMU) += iommu-sifive.o
diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
index 9aeb4b20c145..df7aeb2571ae 100644
--- a/drivers/iommu/riscv/iommu.c
+++ b/drivers/iommu/riscv/iommu.c
@@ -1115,6 +1115,16 @@ static void riscv_iommu_iotlb_sync(struct iommu_domain *iommu_domain,
 	riscv_iommu_iotlb_inval(domain, gather->start, gather->end);
 }
 
+static int riscv_iommu_iotlb_sync_map(struct iommu_domain *iommu_domain,
+				      unsigned long iova, size_t size)
+{
+	struct riscv_iommu_domain *domain = iommu_domain_to_riscv(iommu_domain);
+
+	riscv_iommu_iotlb_inval(domain, iova, iova + size - 1);
+
+	return 0;
+}
+
 static inline size_t get_page_size(size_t size)
 {
 	if (size >= IOMMU_PAGE_SIZE_512G)
@@ -1396,6 +1406,7 @@ static const struct iommu_domain_ops riscv_iommu_paging_domain_ops = {
 	.unmap_pages = riscv_iommu_unmap_pages,
 	.iova_to_phys = riscv_iommu_iova_to_phys,
 	.iotlb_sync = riscv_iommu_iotlb_sync,
+	.iotlb_sync_map = riscv_iommu_iotlb_sync_map,
 	.flush_iotlb_all = riscv_iommu_iotlb_flush_all,
 };
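As an illustration only (not part of this patch): with .iotlb_sync_map wired into riscv_iommu_paging_domain_ops, the core IOMMU API invokes it after installing new mappings, so a ranged invalidation is issued before iommu_map() returns. A minimal caller-side sketch, assuming a valid paging domain and placeholder iova/paddr/len values:

#include <linux/iommu.h>

/* Hypothetical caller: map a range and rely on the core calling
 * ops->iotlb_sync_map(domain, iova, len), which in this driver ends
 * up in riscv_iommu_iotlb_inval(domain, iova, iova + len - 1). */
static int example_map_and_sync(struct iommu_domain *domain,
				unsigned long iova, phys_addr_t paddr,
				size_t len)
{
	return iommu_map(domain, iova, paddr, len,
			 IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
}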
From patchwork Fri Jun 14 14:21:51 2024
X-Patchwork-Submitter: Zong Li
X-Patchwork-Id: 13698746
From: Zong Li
To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org
Cc: Zong Li
Subject: [RFC PATCH v2 05/10] iommu/riscv: support GSCID and GVMA invalidation command
Date: Fri, 14 Jun 2024 22:21:51 +0800
Message-Id: <20240614142156.29420-6-zong.li@sifive.com>
In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com>
References: <20240614142156.29420-1-zong.li@sifive.com>

This patch adds an ID allocator for GSCIDs and a wrapper for setting the GSCID in the IOTLB invalidation command. Set up iohgatp to enable the second-stage table, and flush the stage-2 table when a GSCID is set. The GSCID of a domain should be freed when the domain is released. A GSCID will be allocated for the parent domain in the nested IOMMU flow.

Signed-off-by: Zong Li
---
 drivers/iommu/riscv/iommu-bits.h |  7 ++++++
 drivers/iommu/riscv/iommu.c      | 39 ++++++++++++++++++++++++++++----
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/riscv/iommu-bits.h b/drivers/iommu/riscv/iommu-bits.h
index 60523449f016..214735a335fd 100644
--- a/drivers/iommu/riscv/iommu-bits.h
+++ b/drivers/iommu/riscv/iommu-bits.h
@@ -731,6 +731,13 @@ static inline void riscv_iommu_cmd_inval_vma(struct riscv_iommu_command *cmd)
 	cmd->dword1 = 0;
 }
 
+static inline void riscv_iommu_cmd_inval_gvma(struct riscv_iommu_command *cmd)
+{
+	cmd->dword0 = FIELD_PREP(RISCV_IOMMU_CMD_OPCODE, RISCV_IOMMU_CMD_IOTINVAL_OPCODE) |
+		      FIELD_PREP(RISCV_IOMMU_CMD_FUNC, RISCV_IOMMU_CMD_IOTINVAL_FUNC_GVMA);
+	cmd->dword1 = 0;
+}
+
 static inline void riscv_iommu_cmd_inval_set_addr(struct riscv_iommu_command *cmd,
 						  u64 addr)
 {
diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c
index df7aeb2571ae..45309bd096e5 100644
--- a/drivers/iommu/riscv/iommu.c
+++ b/drivers/iommu/riscv/iommu.c
@@ -45,6 +45,10 @@ static DEFINE_IDA(riscv_iommu_pscids);
 #define RISCV_IOMMU_MAX_PSCID	(BIT(20) - 1)
 
+/* IOMMU GSCID allocation namespace. */
+static DEFINE_IDA(riscv_iommu_gscids);
+#define RISCV_IOMMU_MAX_GSCID	(BIT(16) - 1)
+
 /* Device resource-managed allocations */
 struct riscv_iommu_devres {
 	void *addr;
@@ -845,6 +849,7 @@ struct riscv_iommu_domain {
 	struct list_head bonds;
 	spinlock_t lock; /* protect bonds list updates.
*/ int pscid; + int gscid; int amo_enabled:1; int numa_node; unsigned int pgd_mode; @@ -993,20 +998,33 @@ static void riscv_iommu_iotlb_inval(struct riscv_iommu_domain *domain, rcu_read_lock(); prev = NULL; + list_for_each_entry_rcu(bond, &domain->bonds, list) { iommu = dev_to_iommu(bond->dev); /* * IOTLB invalidation request can be safely omitted if already sent - * to the IOMMU for the same PSCID, and with domain->bonds list + * to the IOMMU for the same PSCID/GSCID, and with domain->bonds list * arranged based on the device's IOMMU, it's sufficient to check * last device the invalidation was sent to. */ if (iommu == prev) continue; - riscv_iommu_cmd_inval_vma(&cmd); - riscv_iommu_cmd_inval_set_pscid(&cmd, domain->pscid); + /* + * S2 domain needs to flush entries in stage-2 page table, its + * bond list has host devices and pass-through devices, the GVMA + * command is no effect on host devices, because there are no + * mapping of host devices in stage-2 page table. + */ + if (domain->gscid) { + riscv_iommu_cmd_inval_gvma(&cmd); + riscv_iommu_cmd_inval_set_gscid(&cmd, domain->gscid); + } else { + riscv_iommu_cmd_inval_vma(&cmd); + riscv_iommu_cmd_inval_set_pscid(&cmd, domain->pscid); + } + if (len && len < RISCV_IOMMU_IOTLB_INVAL_LIMIT) { for (iova = start; iova < end; iova += PAGE_SIZE) { riscv_iommu_cmd_inval_set_addr(&cmd, iova); @@ -1015,6 +1033,7 @@ static void riscv_iommu_iotlb_inval(struct riscv_iommu_domain *domain, } else { riscv_iommu_cmd_send(iommu, &cmd); } + prev = iommu; } @@ -1083,6 +1102,7 @@ static void riscv_iommu_iodir_update(struct riscv_iommu_device *iommu, WRITE_ONCE(dc->fsc, new_dc->fsc); WRITE_ONCE(dc->ta, new_dc->ta & RISCV_IOMMU_PC_TA_PSCID); + WRITE_ONCE(dc->iohgatp, new_dc->iohgatp); /* Update device context, write TC.V as the last step. 
*/ dma_wmb(); WRITE_ONCE(dc->tc, tc); @@ -1354,6 +1374,9 @@ static void riscv_iommu_free_paging_domain(struct iommu_domain *iommu_domain) if ((int)domain->pscid > 0) ida_free(&riscv_iommu_pscids, domain->pscid); + if ((int)domain->gscid > 0) + ida_free(&riscv_iommu_gscids, domain->gscid); + riscv_iommu_pte_free(domain, _io_pte_entry(pfn, _PAGE_TABLE), NULL); kfree(domain); } @@ -1384,8 +1407,14 @@ static int riscv_iommu_attach_paging_domain(struct iommu_domain *iommu_domain, if (!riscv_iommu_pt_supported(iommu, domain->pgd_mode)) return -ENODEV; - dc.fsc = FIELD_PREP(RISCV_IOMMU_PC_FSC_MODE, domain->pgd_mode) | - FIELD_PREP(RISCV_IOMMU_PC_FSC_PPN, virt_to_pfn(domain->pgd_root)); + if (domain->gscid) + dc.iohgatp = FIELD_PREP(RISCV_IOMMU_DC_IOHGATP_MODE, domain->pgd_mode) | + FIELD_PREP(RISCV_IOMMU_DC_IOHGATP_GSCID, domain->gscid) | + FIELD_PREP(RISCV_IOMMU_DC_IOHGATP_PPN, virt_to_pfn(domain->pgd_root)); + else + dc.fsc = FIELD_PREP(RISCV_IOMMU_PC_FSC_MODE, domain->pgd_mode) | + FIELD_PREP(RISCV_IOMMU_PC_FSC_PPN, virt_to_pfn(domain->pgd_root)); + dc.ta = FIELD_PREP(RISCV_IOMMU_PC_TA_PSCID, domain->pscid) | RISCV_IOMMU_PC_TA_V; From patchwork Fri Jun 14 14:21:52 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698747 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 44C10C27C77 for ; Fri, 14 Jun 2024 14:22:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=cfpiQuwDp+S3HlPLrzgB9RjIEgzSEjhEjAYq4CqCHb4=; b=yefLQibuywaq5H AzBt/qF+9ua92Vs4Ety0TJqF7c/AldUS3A4E1ZRPuTb+j+ngHD7/EKlQgVI+c/fQq5Ofjd9q7qGvQ 4CKdo6HeTTzQbyeAiunSD+wi91ww3i2nyK/ApRxrq3F+z7xkkUTSZNR5J0doq+ONNyXdAo/tcVuFx cQqGwwIBUCjsHlMILhqGD1dfTv09Rdb5YXrsvQhBY8cB0qdmEmBIKaraxU35e4vW7Hq1+OeDVPB85 A5ijvvCWT0yanNL/rynj+X5R5oKh6ooDzPbA62XchoqoNAKvaQNafloulnQYgsgAClmrgGVMj2cvl Rms41LGmoMNW9Dg6aEoQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7p0-0000000302H-27Gh; Fri, 14 Jun 2024 14:22:30 +0000 Received: from mail-pl1-x630.google.com ([2607:f8b0:4864:20::630]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7ov-00000002zye-3L2w for linux-riscv@lists.infradead.org; Fri, 14 Jun 2024 14:22:27 +0000 Received: by mail-pl1-x630.google.com with SMTP id d9443c01a7336-1f44b441b08so17984985ad.0 for ; Fri, 14 Jun 2024 07:22:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1718374945; x=1718979745; darn=lists.infradead.org; h=references:in-reply-to:message-id:date:subject:cc:to:from:from:to :cc:subject:date:message-id:reply-to; bh=E4t/LszTGtZ4bHNAAc59INUfcR6I2TK1LJsq3cDy7I4=; b=VSvjngO3f4aPGI3i2VQqt+Q/Qln5/N/LX0k4X/v5N1Yty/trAno3+dWSl5FYbPls2O 
From: Zong Li To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org Cc: Zong Li Subject: [RFC PATCH v2 06/10] iommu/riscv: support nested iommu for getting iommu hardware information Date: Fri, 14 Jun 2024 22:21:52 +0800 Message-Id: <20240614142156.29420-7-zong.li@sifive.com> In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com> References: <20240614142156.29420-1-zong.li@sifive.com> This patch implements the .hw_info operation and the related data structures for passing the IOMMU hardware capabilities to iommufd.
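As a rough sketch of how a VMM might consume this report, the snippet below queries the values through the generic iommufd IOMMU_GET_HW_INFO ioctl and the struct iommu_hw_info_riscv_iommu layout added later in this patch. It assumes kernel headers with this series applied; the open iommufd handle, the bound device id and the helper name are made up for illustration, and the generic struct iommu_hw_info field names are taken from the iommufd uAPI this series targets.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>	/* assumes headers with this series applied */

/* Illustrative helper: fetch the RISC-V IOMMU capability/fctl values. */
static int riscv_iommu_query_hw_info(int iommufd, uint32_t idev_id,
				     struct iommu_hw_info_riscv_iommu *out)
{
	struct iommu_hw_info cmd = {
		.size = sizeof(cmd),
		.dev_id = idev_id,		/* device bound through iommufd */
		.data_len = sizeof(*out),
		.data_uptr = (uintptr_t)out,
	};

	if (ioctl(iommufd, IOMMU_GET_HW_INFO, &cmd))
		return -1;

	if (cmd.out_data_type != IOMMU_HW_INFO_TYPE_RISCV_IOMMU)
		return -1;	/* some other IOMMU type */

	/* Both fields mirror the host registers (spec sections 5.3 and 5.4). */
	printf("capability: 0x%llx fctl: 0x%x\n",
	       (unsigned long long)out->capability, out->fctl);
	return 0;
}

The VMM is then expected to mask out anything it does not want to expose before building the virtual IOMMU's register file; as the uapi comment below notes, ATS is not advertised to the guest because the driver doesn't support it.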
Signed-off-by: Zong Li Reviewed-by: Jason Gunthorpe --- drivers/iommu/riscv/iommu.c | 20 ++++++++++++++++++++ include/uapi/linux/iommufd.h | 18 ++++++++++++++++++ 2 files changed, 38 insertions(+) diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c index 45309bd096e5..2130106e421f 100644 --- a/drivers/iommu/riscv/iommu.c +++ b/drivers/iommu/riscv/iommu.c @@ -19,6 +19,7 @@ #include #include #include +#include #include "../iommu-pages.h" #include "iommu-bits.h" @@ -1567,6 +1568,24 @@ static struct iommu_domain riscv_iommu_identity_domain = { } }; +static void *riscv_iommu_hw_info(struct device *dev, u32 *length, u32 *type) +{ + struct riscv_iommu_device *iommu = dev_to_iommu(dev); + struct iommu_hw_info_riscv_iommu *info; + + info = kzalloc(sizeof(*info), GFP_KERNEL); + if (!info) + return ERR_PTR(-ENOMEM); + + info->capability = iommu->caps; + info->fctl = riscv_iommu_readl(iommu, RISCV_IOMMU_REG_FCTL); + + *length = sizeof(*info); + *type = IOMMU_HW_INFO_TYPE_RISCV_IOMMU; + + return info; +} + static int riscv_iommu_device_domain_type(struct device *dev) { return 0; @@ -1644,6 +1663,7 @@ static void riscv_iommu_release_device(struct device *dev) static const struct iommu_ops riscv_iommu_ops = { .pgsize_bitmap = SZ_4K, .of_xlate = riscv_iommu_of_xlate, + .hw_info = riscv_iommu_hw_info, .identity_domain = &riscv_iommu_identity_domain, .blocked_domain = &riscv_iommu_blocking_domain, .release_domain = &riscv_iommu_blocking_domain, diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h index 1dfeaa2e649e..736f4408b5e0 100644 --- a/include/uapi/linux/iommufd.h +++ b/include/uapi/linux/iommufd.h @@ -475,15 +475,33 @@ struct iommu_hw_info_vtd { __aligned_u64 ecap_reg; }; +/** + * struct iommu_hw_info_riscv_iommu - RISCV IOMMU hardware information + * + * @capability: Value of RISC-V IOMMU capability register defined in + * RISC-V IOMMU spec section 5.3 IOMMU capabilities + * @fctl: Value of RISC-V IOMMU feature control register defined in + * RISC-V IOMMU spec section 5.4 Features-control register + * + * Don't advertise ATS support to the guest because driver doesn't support it. 
+ */ +struct iommu_hw_info_riscv_iommu { + __aligned_u64 capability; + __u32 fctl; + __u32 __reserved; +}; + /** * enum iommu_hw_info_type - IOMMU Hardware Info Types * @IOMMU_HW_INFO_TYPE_NONE: Used by the drivers that do not report hardware * info * @IOMMU_HW_INFO_TYPE_INTEL_VTD: Intel VT-d iommu info type + * @IOMMU_HW_INFO_TYPE_RISCV_IOMMU: RISC-V iommu info type */ enum iommu_hw_info_type { IOMMU_HW_INFO_TYPE_NONE, IOMMU_HW_INFO_TYPE_INTEL_VTD, + IOMMU_HW_INFO_TYPE_RISCV_IOMMU, }; /** From patchwork Fri Jun 14 14:21:53 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698748 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3CC23C27C6E for ; Fri, 14 Jun 2024 14:22:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=p48xZDHhedMIb1kfACWGibt9mNaODqqdxIf3fGxWp7M=; b=31lFu2fFJL4geA CJ2nXXnESSWBQGNA2l5mD+CC88Lze1daBsn0t7l3xTzoeDivqDiUzhRuUgRkCfsBfm8Z4NI+lhX2z DdGBOfTQLj9y8KR7Z8asbYydqOrNkgv1cX8uaF1L18AQyowvWNyV1VfoyjN4eo/rtCCT08UupW7nm 5y27MFLDiGSpHRSBclNrw2Bs0WoAhro3dpWM4B/7StMjpPSN1buyOqYODWzTo5dsl24tHOFWd20o8 75+MslgABAw++t3eVWzuPQc+4CMx91eREgcjiGqFn0JlkwzKSEWZw5ZugrVrsydmYUgTkgFg2LZen V6grrecI0ctNG9inGTgw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7p4-0000000304j-0Cmo; Fri, 14 Jun 2024 14:22:34 +0000 Received: from mail-pl1-x631.google.com ([2607:f8b0:4864:20::631]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7oz-0000000300x-1XgT for linux-riscv@lists.infradead.org; Fri, 14 Jun 2024 14:22:32 +0000 Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1f4a5344ec7so16231015ad.1 for ; Fri, 14 Jun 2024 07:22:28 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1718374948; x=1718979748; darn=lists.infradead.org; h=references:in-reply-to:message-id:date:subject:cc:to:from:from:to :cc:subject:date:message-id:reply-to; bh=icfonVjdOGiS4jBGMCDTwtNC3iVuGDFaRUw0zwHlFBc=; b=RcggS3dk3S49swZwh5cj5Hs1Ftt5hLGdrJxWJ/B2/Tt9bNNDIO5wurOxK9k23I81Y1 a3N/+TVMrjioGGAI9+mj1XpgWy0CFUAfXMMoC8i1GQDVQ9cUCVF694oHX55KI/A5mel7 UxGRn9GokEkbh89kbIpLGOYwybLzvsMpUFemBPcPi/B3JWuyivk6TsX1TiCOZyQELcIK mHoihcwoyF8BFTUYzZmcXDNXwU7mDJ6e0kgF/ouuS4MK8s7Qd2adJH6bZiP6s+F6M90Z HtwZCYf/jkcJgb1em4WM+j1K2EwqfDk1Kpvbl1rXviBO0wFWeln9LZwo8d1VgFWnSTCb fqaQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718374948; x=1718979748; h=references:in-reply-to:message-id:date:subject:cc:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=icfonVjdOGiS4jBGMCDTwtNC3iVuGDFaRUw0zwHlFBc=; b=eMaFro+Hr+vyNl64ApiihdB0MFOpaTwpBCqSk08ahDLBKWjKBses1hnXC42oFueiMk nmzK/E5rtLOJZvg28vGwu+/c+bwpxyGandO+gkyhFF2qXWrDQr51LT19p1C/Q3z4cKDq 
From: Zong Li To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org Cc: Zong Li Subject: [RFC PATCH v2 07/10] iommu/riscv: support nested iommu for creating domains owned by userspace Date: Fri, 14 Jun 2024 22:21:53 +0800 Message-Id: <20240614142156.29420-8-zong.li@sifive.com> In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com> References: <20240614142156.29420-1-zong.li@sifive.com> This patch implements the .domain_alloc_user operation for creating domains owned by userspace, e.g. through IOMMUFD. Add an s2 domain as the parent domain for the second stage; the s1 domain will be the first stage. Don't remove the IOMMU private data of the device in the blocked domain, because it holds the device's user data, which is used when attaching the device to an s1 domain. Signed-off-by: Zong Li --- drivers/iommu/riscv/iommu.c | 236 ++++++++++++++++++++++++++++++++++- include/uapi/linux/iommufd.h | 17 +++ 2 files changed, 252 insertions(+), 1 deletion(-) diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c index 2130106e421f..410b236e9b24 100644 --- a/drivers/iommu/riscv/iommu.c +++ b/drivers/iommu/riscv/iommu.c @@ -846,6 +846,8 @@ static int riscv_iommu_iodir_set_mode(struct riscv_iommu_device *iommu, /* This struct contains protection domain specific IOMMU driver data. */ struct riscv_iommu_domain { + struct riscv_iommu_domain *s2; + struct riscv_iommu_device *iommu; struct iommu_domain domain; struct list_head bonds; spinlock_t lock; /* protect bonds list updates.
*/ @@ -863,6 +865,7 @@ struct riscv_iommu_domain { /* Private IOMMU data for managed devices, dev_iommu_priv_* */ struct riscv_iommu_info { struct riscv_iommu_domain *domain; + struct riscv_iommu_dc dc_user; }; /* @@ -1532,7 +1535,6 @@ static int riscv_iommu_attach_blocking_domain(struct iommu_domain *iommu_domain, /* Make device context invalid, translation requests will fault w/ #258 */ riscv_iommu_iodir_update(iommu, dev, &dc); riscv_iommu_bond_unlink(info->domain, dev); - info->domain = NULL; return 0; } @@ -1568,6 +1570,237 @@ static struct iommu_domain riscv_iommu_identity_domain = { } }; +/** + * Nested IOMMU operations + */ + +static int riscv_iommu_attach_dev_nested(struct iommu_domain *domain, struct device *dev) +{ + struct riscv_iommu_domain *riscv_domain = iommu_domain_to_riscv(domain); + struct riscv_iommu_device *iommu = dev_to_iommu(dev); + struct riscv_iommu_info *info = dev_iommu_priv_get(dev); + + /* + * Add bond to the new domain's list, but don't unlink in current domain. + * We need to flush entries in stage-2 page table by iterating the list. + */ + if (riscv_iommu_bond_link(riscv_domain, dev)) + return -ENOMEM; + + riscv_iommu_iotlb_inval(riscv_domain, 0, ULONG_MAX); + info->dc_user.ta |= RISCV_IOMMU_PC_TA_V; + riscv_iommu_iodir_update(iommu, dev, &info->dc_user); + + info->domain = riscv_domain; + + return 0; +} + +static void riscv_iommu_domain_free_nested(struct iommu_domain *domain) +{ + struct riscv_iommu_domain *riscv_domain = iommu_domain_to_riscv(domain); + struct riscv_iommu_bond *bond; + + /* Unlink bond in s2 domain, because we link bond both on s1 and s2 domain */ + list_for_each_entry_rcu(bond, &riscv_domain->s2->bonds, list) + riscv_iommu_bond_unlink(riscv_domain->s2, bond->dev); + + if ((int)riscv_domain->pscid > 0) + ida_free(&riscv_iommu_pscids, riscv_domain->pscid); + + kfree(riscv_domain); +} + +static const struct iommu_domain_ops riscv_iommu_nested_domain_ops = { + .attach_dev = riscv_iommu_attach_dev_nested, + .free = riscv_iommu_domain_free_nested, +}; + +static int +riscv_iommu_get_dc_user(struct device *dev, struct iommu_hwpt_riscv_iommu *user_arg) +{ + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); + struct riscv_iommu_device *iommu = dev_to_iommu(dev); + struct riscv_iommu_info *info = dev_iommu_priv_get(dev); + struct riscv_iommu_dc dc; + struct riscv_iommu_fq_record event; + u64 dc_len = sizeof(struct riscv_iommu_dc) >> + (!(iommu->caps & RISCV_IOMMU_CAPABILITIES_MSI_FLAT)); + u64 event_len = sizeof(struct riscv_iommu_fq_record); + void __user *event_user = NULL; + + for (int i = 0; i < fwspec->num_ids; i++) { + event.hdr = + FIELD_PREP(RISCV_IOMMU_FQ_HDR_CAUSE, RISCV_IOMMU_FQ_CAUSE_DDT_INVALID) | + FIELD_PREP(RISCV_IOMMU_FQ_HDR_DID, fwspec->ids[i]); + + /* Sanity check DC of stage-1 from user data */ + if (!user_arg->out_event_uptr || user_arg->event_len != event_len) + return -EINVAL; + + event_user = u64_to_user_ptr(user_arg->out_event_uptr); + + if (!user_arg->dc_uptr || user_arg->dc_len != dc_len) + return -EINVAL; + + if (copy_from_user(&dc, u64_to_user_ptr(user_arg->dc_uptr), dc_len)) + return -EFAULT; + + if (!(dc.tc & RISCV_IOMMU_DDTE_V)) { + dev_dbg(dev, "Invalid DDT from user data\n"); + if (copy_to_user(event_user, &event, event_len)) + return -EFAULT; + } + + if (!dc.fsc || dc.iohgatp) { + dev_dbg(dev, "Wrong page table from user data\n"); + if (copy_to_user(event_user, &event, event_len)) + return -EFAULT; + } + + /* Save DC of stage-1 from user data */ + memcpy(&info->dc_user, + riscv_iommu_get_dc(iommu, 
fwspec->ids[i]), + sizeof(struct riscv_iommu_dc)); + info->dc_user.fsc = dc.fsc; + } + + return 0; +} + +static struct iommu_domain * +riscv_iommu_domain_alloc_nested(struct device *dev, + struct iommu_domain *parent, + const struct iommu_user_data *user_data) +{ + struct riscv_iommu_domain *s2_domain = iommu_domain_to_riscv(parent); + struct riscv_iommu_domain *s1_domain; + struct riscv_iommu_device *iommu = dev_to_iommu(dev); + struct iommu_hwpt_riscv_iommu arg; + int ret, va_bits; + + if (user_data->type != IOMMU_HWPT_DATA_RISCV_IOMMU) + return ERR_PTR(-EOPNOTSUPP); + + if (parent->type != IOMMU_DOMAIN_UNMANAGED) + return ERR_PTR(-EINVAL); + + ret = iommu_copy_struct_from_user(&arg, + user_data, + IOMMU_HWPT_DATA_RISCV_IOMMU, + out_event_uptr); + if (ret) + return ERR_PTR(ret); + + s1_domain = kzalloc(sizeof(*s1_domain), GFP_KERNEL); + if (!s1_domain) + return ERR_PTR(-ENOMEM); + + spin_lock_init(&s1_domain->lock); + INIT_LIST_HEAD_RCU(&s1_domain->bonds); + + s1_domain->pscid = ida_alloc_range(&riscv_iommu_pscids, 1, + RISCV_IOMMU_MAX_PSCID, GFP_KERNEL); + if (s1_domain->pscid < 0) { + iommu_free_page(s1_domain->pgd_root); + kfree(s1_domain); + return ERR_PTR(-ENOMEM); + } + + /* Get device context of stage-1 from user*/ + ret = riscv_iommu_get_dc_user(dev, &arg); + if (ret) { + kfree(s1_domain); + return ERR_PTR(-EINVAL); + } + + if (!iommu) { + va_bits = VA_BITS; + } else if (iommu->caps & RISCV_IOMMU_CAPABILITIES_SV57) { + va_bits = 57; + } else if (iommu->caps & RISCV_IOMMU_CAPABILITIES_SV48) { + va_bits = 48; + } else if (iommu->caps & RISCV_IOMMU_CAPABILITIES_SV39) { + va_bits = 39; + } else { + dev_err(dev, "cannot find supported page table mode\n"); + return ERR_PTR(-ENODEV); + } + + /* + * The ops->domain_alloc_user could be directly called by the iommufd core, + * instead of iommu core. So, this function need to set the default value of + * following data member: + * - domain->pgsize_bitmap + * - domain->geometry + * - domain->type + * - domain->ops + */ + s1_domain->s2 = s2_domain; + s1_domain->iommu = iommu; + s1_domain->domain.type = IOMMU_DOMAIN_NESTED; + s1_domain->domain.ops = &riscv_iommu_nested_domain_ops; + s1_domain->domain.pgsize_bitmap = SZ_4K; + s1_domain->domain.geometry.aperture_start = 0; + s1_domain->domain.geometry.aperture_end = DMA_BIT_MASK(va_bits - 1); + s1_domain->domain.geometry.force_aperture = true; + + return &s1_domain->domain; +} + +static struct iommu_domain * +riscv_iommu_domain_alloc_user(struct device *dev, u32 flags, + struct iommu_domain *parent, + const struct iommu_user_data *user_data) +{ + struct iommu_domain *domain; + struct riscv_iommu_domain *riscv_domain; + + /* Allocate stage-1 domain if it has stage-2 parent domain */ + if (parent) + return riscv_iommu_domain_alloc_nested(dev, parent, user_data); + + if (flags & ~((IOMMU_HWPT_ALLOC_NEST_PARENT | IOMMU_HWPT_ALLOC_DIRTY_TRACKING))) + return ERR_PTR(-EOPNOTSUPP); + + if (user_data) + return ERR_PTR(-EINVAL); + + /* domain_alloc_user op needs to be fully initialized */ + domain = iommu_domain_alloc(dev->bus); + if (!domain) + return ERR_PTR(-ENOMEM); + + /* + * We assume that nest-parent or g-stage only will come here + * TODO: Shadow page table doesn't be supported now. + * We currently can't distinguish g-stage and shadow + * page table here. Shadow page table shouldn't be + * put at stage-2. 
+ */ + riscv_domain = iommu_domain_to_riscv(domain); + + /* pgd_root may be allocated in .domain_alloc_paging */ + if (riscv_domain->pgd_root) + iommu_free_page(riscv_domain->pgd_root); + + riscv_domain->pgd_root = iommu_alloc_pages_node(riscv_domain->numa_node, + GFP_KERNEL_ACCOUNT, + 2); + if (!riscv_domain->pgd_root) + return ERR_PTR(-ENOMEM); + + riscv_domain->gscid = ida_alloc_range(&riscv_iommu_gscids, 1, + RISCV_IOMMU_MAX_GSCID, GFP_KERNEL); + if (riscv_domain->gscid < 0) { + iommu_free_pages(riscv_domain->pgd_root, 2); + kfree(riscv_domain); + return ERR_PTR(-ENOMEM); + } + + return domain; +} + static void *riscv_iommu_hw_info(struct device *dev, u32 *length, u32 *type) { struct riscv_iommu_device *iommu = dev_to_iommu(dev); @@ -1668,6 +1901,7 @@ static const struct iommu_ops riscv_iommu_ops = { .blocked_domain = &riscv_iommu_blocking_domain, .release_domain = &riscv_iommu_blocking_domain, .domain_alloc_paging = riscv_iommu_alloc_paging_domain, + .domain_alloc_user = riscv_iommu_domain_alloc_user, .def_domain_type = riscv_iommu_device_domain_type, .device_group = riscv_iommu_device_group, .probe_device = riscv_iommu_probe_device, diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h index 736f4408b5e0..514463fe85d3 100644 --- a/include/uapi/linux/iommufd.h +++ b/include/uapi/linux/iommufd.h @@ -390,14 +390,31 @@ struct iommu_hwpt_vtd_s1 { __u32 __reserved; }; +/** + * struct iommu_hwpt_riscv_iommu - RISCV IOMMU stage-1 device context table + * info (IOMMU_HWPT_TYPE_RISCV_IOMMU) + * @dc_len: Length of device context + * @dc_uptr: User pointer to the address of device context + * @event_len: Length of an event record + * @out_event_uptr: User pointer to the address of event record + */ +struct iommu_hwpt_riscv_iommu { + __aligned_u64 dc_len; + __aligned_u64 dc_uptr; + __aligned_u64 event_len; + __aligned_u64 out_event_uptr; +}; + /** * enum iommu_hwpt_data_type - IOMMU HWPT Data Type * @IOMMU_HWPT_DATA_NONE: no data * @IOMMU_HWPT_DATA_VTD_S1: Intel VT-d stage-1 page table + * @IOMMU_HWPT_DATA_RISCV_IOMMU: RISC-V IOMMU device context table */ enum iommu_hwpt_data_type { IOMMU_HWPT_DATA_NONE, IOMMU_HWPT_DATA_VTD_S1, + IOMMU_HWPT_DATA_RISCV_IOMMU, }; /** From patchwork Fri Jun 14 14:21:54 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698749 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 7AEE0C27C6E for ; Fri, 14 Jun 2024 14:22:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=D4pgzVVBLtXwC6FVPb04DQLz4i/s1PEmN1dYGXqLCn4=; b=uX5/ueIMO1wASt cEG1JDN3Iclh5OZslTBFOUi/e+z9nRN2IOEOTrIpQudzT/Yk/3V0Z/MlwHzIZ5uQ4QPaW56/j1uSW +PA5wMiRKGw9RpJ2xa8tfT0EZquMnNrOZh5BiLvVFuEJnSe4cOOKeXmqy8eFz2BcfSgV9qO5bNGYW itYPCx2Pf8ecF3/er+a5Hsqpwf216hoEEM3ZO/WBEki2Is/PNc7o67xYifgpwWCMnWN/ZCjfwakUK 
[59.124.168.89]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f855e559d9sm32522005ad.35.2024.06.14.07.22.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 14 Jun 2024 07:22:29 -0700 (PDT) From: Zong Li To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org Cc: Zong Li Subject: [RFC PATCH v2 08/10] iommu/riscv: support nested iommu for flushing cache Date: Fri, 14 Jun 2024 22:21:54 +0800 Message-Id: <20240614142156.29420-9-zong.li@sifive.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com> References: <20240614142156.29420-1-zong.li@sifive.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240614_072231_846844_66898513 X-CRM114-Status: GOOD ( 17.57 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org This patch implements cache_invalidate_user operation for the userspace to flush the hardware caches for a nested domain through iommufd. Signed-off-by: Zong Li --- drivers/iommu/riscv/iommu.c | 90 ++++++++++++++++++++++++++++++++++-- include/uapi/linux/iommufd.h | 11 +++++ 2 files changed, 97 insertions(+), 4 deletions(-) diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c index 410b236e9b24..d08eb0a2939e 100644 --- a/drivers/iommu/riscv/iommu.c +++ b/drivers/iommu/riscv/iommu.c @@ -1587,8 +1587,9 @@ static int riscv_iommu_attach_dev_nested(struct iommu_domain *domain, struct dev if (riscv_iommu_bond_link(riscv_domain, dev)) return -ENOMEM; - riscv_iommu_iotlb_inval(riscv_domain, 0, ULONG_MAX); - info->dc_user.ta |= RISCV_IOMMU_PC_TA_V; + if (riscv_iommu_bond_link(info->domain, dev)) + return -ENOMEM; + riscv_iommu_iodir_update(iommu, dev, &info->dc_user); info->domain = riscv_domain; @@ -1611,13 +1612,92 @@ static void riscv_iommu_domain_free_nested(struct iommu_domain *domain) kfree(riscv_domain); } +static int riscv_iommu_fix_user_cmd(struct riscv_iommu_command *cmd, + unsigned int pscid, unsigned int gscid) +{ + u32 opcode = FIELD_GET(RISCV_IOMMU_CMD_OPCODE, cmd->dword0); + + switch (opcode) { + case RISCV_IOMMU_CMD_IOTINVAL_OPCODE: + u32 func = FIELD_GET(RISCV_IOMMU_CMD_FUNC, cmd->dword0); + + if (func != RISCV_IOMMU_CMD_IOTINVAL_FUNC_GVMA && + func != RISCV_IOMMU_CMD_IOTINVAL_FUNC_VMA) { + pr_warn("The IOTINVAL function: 0x%x is not supported\n", + func); + return -EOPNOTSUPP; + } + + if (func == RISCV_IOMMU_CMD_IOTINVAL_FUNC_GVMA) { + cmd->dword0 &= ~RISCV_IOMMU_CMD_FUNC; + cmd->dword0 |= FIELD_PREP(RISCV_IOMMU_CMD_FUNC, + RISCV_IOMMU_CMD_IOTINVAL_FUNC_VMA); + } + + cmd->dword0 &= ~(RISCV_IOMMU_CMD_IOTINVAL_PSCID | + RISCV_IOMMU_CMD_IOTINVAL_GSCID); + riscv_iommu_cmd_inval_set_pscid(cmd, pscid); + riscv_iommu_cmd_inval_set_gscid(cmd, gscid); + break; + case RISCV_IOMMU_CMD_IODIR_OPCODE: + /* + * Ensure the device ID is right. We expect that VMM has + * transferred the device ID to host's from guest's. 
+ */ + break; + default: + return -EOPNOTSUPP; + } + + return 0; +} + +static int riscv_iommu_cache_invalidate_user(struct iommu_domain *domain, + struct iommu_user_data_array *array) +{ + struct riscv_iommu_domain *riscv_domain = iommu_domain_to_riscv(domain); + struct iommu_hwpt_riscv_iommu_invalidate inv_info; + int ret, index; + + if (array->type != IOMMU_HWPT_INVALIDATE_DATA_RISCV_IOMMU) { + ret = -EINVAL; + goto out; + } + + for (index = 0; index < array->entry_num; index++) { + ret = iommu_copy_struct_from_user_array(&inv_info, array, + IOMMU_HWPT_INVALIDATE_DATA_RISCV_IOMMU, + index, cmd); + if (ret) + break; + + ret = riscv_iommu_fix_user_cmd((struct riscv_iommu_command *)inv_info.cmd, + riscv_domain->pscid, + riscv_domain->s2->gscid); + if (ret == -EOPNOTSUPP) + continue; + + riscv_iommu_cmd_send(riscv_domain->iommu, + (struct riscv_iommu_command *)inv_info.cmd); + riscv_iommu_cmd_sync(riscv_domain->iommu, + RISCV_IOMMU_IOTINVAL_TIMEOUT); + } + +out: + array->entry_num = index; + + return ret; +} + static const struct iommu_domain_ops riscv_iommu_nested_domain_ops = { .attach_dev = riscv_iommu_attach_dev_nested, .free = riscv_iommu_domain_free_nested, + .cache_invalidate_user = riscv_iommu_cache_invalidate_user, }; static int -riscv_iommu_get_dc_user(struct device *dev, struct iommu_hwpt_riscv_iommu *user_arg) +riscv_iommu_get_dc_user(struct device *dev, struct iommu_hwpt_riscv_iommu *user_arg, + struct riscv_iommu_domain *s1_domain) { struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct riscv_iommu_device *iommu = dev_to_iommu(dev); @@ -1663,6 +1743,8 @@ riscv_iommu_get_dc_user(struct device *dev, struct iommu_hwpt_riscv_iommu *user_ riscv_iommu_get_dc(iommu, fwspec->ids[i]), sizeof(struct riscv_iommu_dc)); info->dc_user.fsc = dc.fsc; + info->dc_user.ta = FIELD_PREP(RISCV_IOMMU_PC_TA_PSCID, s1_domain->pscid) | + RISCV_IOMMU_PC_TA_V; } return 0; @@ -1708,7 +1790,7 @@ riscv_iommu_domain_alloc_nested(struct device *dev, } /* Get device context of stage-1 from user*/ - ret = riscv_iommu_get_dc_user(dev, &arg); + ret = riscv_iommu_get_dc_user(dev, &arg, s1_domain); if (ret) { kfree(s1_domain); return ERR_PTR(-EINVAL); diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h index 514463fe85d3..876cbe980a42 100644 --- a/include/uapi/linux/iommufd.h +++ b/include/uapi/linux/iommufd.h @@ -653,9 +653,11 @@ struct iommu_hwpt_get_dirty_bitmap { * enum iommu_hwpt_invalidate_data_type - IOMMU HWPT Cache Invalidation * Data Type * @IOMMU_HWPT_INVALIDATE_DATA_VTD_S1: Invalidation data for VTD_S1 + * @IOMMU_HWPT_INVALIDATE_DATA_RISCV_IOMMU: Invalidation data for RISCV_IOMMU */ enum iommu_hwpt_invalidate_data_type { IOMMU_HWPT_INVALIDATE_DATA_VTD_S1, + IOMMU_HWPT_INVALIDATE_DATA_RISCV_IOMMU, }; /** @@ -694,6 +696,15 @@ struct iommu_hwpt_vtd_s1_invalidate { __u32 __reserved; }; +/** + * struct iommu_hwpt_riscv_iommu_invalidate - RISCV IOMMU cache invalidation + * (IOMMU_HWPT_TYPE_RISCV_IOMMU) + * @cmd: An array holds a command for cache invalidation + */ +struct iommu_hwpt_riscv_iommu_invalidate { + __aligned_u64 cmd[2]; +}; + /** * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE) * @size: sizeof(struct iommu_hwpt_invalidate) From patchwork Fri Jun 14 14:21:55 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698750 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
(59-124-168-89.hinet-ip.hinet.net. [59.124.168.89]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f855e559d9sm32522005ad.35.2024.06.14.07.22.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 14 Jun 2024 07:22:32 -0700 (PDT) From: Zong Li To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org Cc: Nicolin Chen Subject: [RFC PATCH v2 09/10] iommu/dma: Support MSIs through nested domains Date: Fri, 14 Jun 2024 22:21:55 +0800 Message-Id: <20240614142156.29420-10-zong.li@sifive.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com> References: <20240614142156.29420-1-zong.li@sifive.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240614_072234_403708_CC7EFA29 X-CRM114-Status: GOOD ( 17.36 ) X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org From: Robin Murphy Currently, iommu-dma is the only place outside of IOMMUFD and drivers which might need to be aware of the stage 2 domain encapsulated within a nested domain. This would be in the legacy-VFIO-style case where we're using host-managed MSIs with an identity mapping at stage 1, where it is the underlying stage 2 domain which owns an MSI cookie and holds the corresponding dynamic mappings. Hook up the new op to resolve what we need from a nested domain. Signed-off-by: Robin Murphy Signed-off-by: Nicolin Chen --- drivers/iommu/dma-iommu.c | 18 ++++++++++++++++-- include/linux/iommu.h | 4 ++++ 2 files changed, 20 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index f731e4b2a417..d4235bb0a427 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1806,6 +1806,20 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, return NULL; } +/* + * Nested domains may not have an MSI cookie or accept mappings, but they may + * be related to a domain which does, so we let them tell us what they need. 
+ */ +static struct iommu_domain *iommu_dma_get_msi_mapping_domain(struct device *dev) +{ + struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + + if (domain && domain->type == IOMMU_DOMAIN_NESTED && + domain->ops->get_msi_mapping_domain) + domain = domain->ops->get_msi_mapping_domain(domain); + return domain; +} + /** * iommu_dma_prepare_msi() - Map the MSI page in the IOMMU domain * @desc: MSI descriptor, will store the MSI page @@ -1816,7 +1830,7 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev, int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) { struct device *dev = msi_desc_to_dev(desc); - struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + struct iommu_domain *domain = iommu_dma_get_msi_mapping_domain(dev); struct iommu_dma_msi_page *msi_page; static DEFINE_MUTEX(msi_prepare_lock); /* see below */ @@ -1849,7 +1863,7 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr) void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg) { struct device *dev = msi_desc_to_dev(desc); - const struct iommu_domain *domain = iommu_get_domain_for_dev(dev); + const struct iommu_domain *domain = iommu_dma_get_msi_mapping_domain(dev); const struct iommu_dma_msi_page *msi_page; msi_page = msi_desc_get_iommu_cookie(desc); diff --git a/include/linux/iommu.h b/include/linux/iommu.h index 7bc8dff7cf6d..400df9ae7012 100644 --- a/include/linux/iommu.h +++ b/include/linux/iommu.h @@ -629,6 +629,8 @@ struct iommu_ops { * @enable_nesting: Enable nesting * @set_pgtable_quirks: Set io page table quirks (IO_PGTABLE_QUIRK_*) * @free: Release the domain after use. + * @get_msi_mapping_domain: Return the related iommu_domain that should hold the + * MSI cookie and accept mapping(s). 
*/ struct iommu_domain_ops { int (*attach_dev)(struct iommu_domain *domain, struct device *dev); @@ -659,6 +661,8 @@ struct iommu_domain_ops { unsigned long quirks); void (*free)(struct iommu_domain *domain); + struct iommu_domain * + (*get_msi_mapping_domain)(struct iommu_domain *domain); }; /** From patchwork Fri Jun 14 14:21:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zong Li X-Patchwork-Id: 13698751 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 45A51C27C6E for ; Fri, 14 Jun 2024 14:22:47 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:MIME-Version:List-Subscribe:List-Help: List-Post:List-Archive:List-Unsubscribe:List-Id:References:In-Reply-To: Message-Id:Date:Subject:Cc:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ihXO45tS8D/wPgQzXHixcyDepbfIxHx9R3fcdgWKmyE=; b=RxF8eIGDTn3qVa mwZwLIGvVaw+6cD2N2CKUJRKH2khMks1wx2NFlwCZqVPiH3ihfW1VYaBDBRpkVvytPfOokaLKXyvC 1alV/PcZJEBfq7kNQnMXCnYwVLtpvQNLMD9rV6XmTKkhCM6Cef8GLdIl0X1EIz20V9cd6m5XduyHI B+mlD+ZjD7f6HYYWerfSZM0jaDWPzcAIXTISwqKL/o0XGR3ecVbN5my8KHqkgWi5mFZ+aBVy9VU2I gy0bupmZCcSfAhYpUwBdNBC+XBaEcBQIXITjmvhRv73GjmcBFx1JXiTTYm8OCcoHTN6dtmD7Jd8sB oWZX4HDnaNzmPyYH1o/Q==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7pD-000000030Bq-1q2L; Fri, 14 Jun 2024 14:22:43 +0000 Received: from mail-pl1-x62d.google.com ([2607:f8b0:4864:20::62d]) by bombadil.infradead.org with esmtps (Exim 4.97.1 #2 (Red Hat Linux)) id 1sI7p5-0000000306J-3yh9 for linux-riscv@lists.infradead.org; Fri, 14 Jun 2024 14:22:39 +0000 Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1f70c457823so18653455ad.3 for ; Fri, 14 Jun 2024 07:22:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; t=1718374955; x=1718979755; darn=lists.infradead.org; h=references:in-reply-to:message-id:date:subject:cc:to:from:from:to :cc:subject:date:message-id:reply-to; bh=wDCzcwoY+7Urqf0z9zEneV8CU/qabdLYOlq0L2NAoPg=; b=mZQug5wzRxudF0cYbKD6+1X66ngcMgbopruvnNGC9zGfLibo+a+EPS9D3oXDWIs7F5 ZlCg8EnBUGt+DYus9YokzCzfQs64DH61oUl58VMOT2duKpaxZXVKabL1hbc/MuaWJXCI daoJCIILTP7SGsLLx1QSFcqlrivt3ZmVQXQJPXn3iHg6OBgt4sjxKfJeF+/aBJluKqut 1vcy9C43U3ST5GtS50LXzxnxZM2srsc8IFxsKQD+46yAIwOCDurwdIFC/4IWB1s4BJ1h E9GbrShWgt+kmqLpUCrmjczW7hOkZVYhuULb07W3dKjw9T//VEpSVmVJtXxcQH+kuZJq tVCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1718374955; x=1718979755; h=references:in-reply-to:message-id:date:subject:cc:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=wDCzcwoY+7Urqf0z9zEneV8CU/qabdLYOlq0L2NAoPg=; b=l/xPJ0fyvudz/VkZMwrm0H4NMMV4lwgeZ7KhPwdJlhv4yDZ+pa7QND+TdMbGYDAA2E KhGA55sdLq1H1BYbCwse/PImorFjXN8cMbaGt3uhE+20xk3BytLNJ5S04ZIAjyTxF0oj DdPeAW1CWvcAlVYR+iJiBPcuMnbXsqGk22oqCCFKbXCU2WJbifvWzZ4NHCZqJWJ3QrAg E+Qitv3QxtzQRJcQuUBbqxbnMpNwyoebt0EWeIftwBiBV0geJOTzNDSdZwP7v9bxQq/Z 
nzQS2AnNbRxb7qYudL65LHPxeLAXFsXoGgLdavrDtLmXuyhXL8dTjHG2KuVuZasAcXik tE6w== X-Forwarded-Encrypted: i=1; AJvYcCXGmi6gu/GD/JrhUvf8pV0Pbpt1hX7ajnGLatvzJdLizlZr6KiOKFdxburG9xfAB3QdpIRm+lo/vBafjKgiQJgGp5I4Ts1Qbc/CyS3/gFbn X-Gm-Message-State: AOJu0YzagUfgH6d6wWOQslqgymv5JU/jxpnAJEjsfC9PzN9OoBFcN7gN u1yB8fm2smZOS2dh/H1sHBkNkOi5LV6XRdCxxaGxQKJYVUzVstMStXBAaEFCP3I= X-Google-Smtp-Source: AGHT+IFNWiib9Mbp3WUeDaTWc2pLd5fn9/fMVeRdDHjkyMdQAAUYNKmmTVlC+NReoIyOXuSV1VHXpQ== X-Received: by 2002:a17:902:f68e:b0:1f7:2849:183f with SMTP id d9443c01a7336-1f8625c243amr33675115ad.1.1718374955357; Fri, 14 Jun 2024 07:22:35 -0700 (PDT) Received: from hsinchu26.internal.sifive.com (59-124-168-89.hinet-ip.hinet.net. [59.124.168.89]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-1f855e559d9sm32522005ad.35.2024.06.14.07.22.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 14 Jun 2024 07:22:35 -0700 (PDT) From: Zong Li To: joro@8bytes.org, will@kernel.org, robin.murphy@arm.com, tjeznach@rivosinc.com, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, jgg@ziepe.ca, kevin.tian@intel.com, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-riscv@lists.infradead.org Cc: Zong Li Subject: [RFC PATCH v2 10/10] iommu:riscv: support nested iommu for get_msi_mapping_domain operation Date: Fri, 14 Jun 2024 22:21:56 +0800 Message-Id: <20240614142156.29420-11-zong.li@sifive.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240614142156.29420-1-zong.li@sifive.com> References: <20240614142156.29420-1-zong.li@sifive.com> X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20240614_072236_114779_4F2DD1FF X-CRM114-Status: UNSURE ( 8.34 ) X-CRM114-Notice: Please train this message. X-BeenThere: linux-riscv@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , MIME-Version: 1.0 Sender: "linux-riscv" Errors-To: linux-riscv-bounces+linux-riscv=archiver.kernel.org@lists.infradead.org Return the iommu_domain that should hold the MSI cookie. Signed-off-by: Zong Li --- drivers/iommu/riscv/iommu.c | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/drivers/iommu/riscv/iommu.c b/drivers/iommu/riscv/iommu.c index d08eb0a2939e..969a0ba32c9e 100644 --- a/drivers/iommu/riscv/iommu.c +++ b/drivers/iommu/riscv/iommu.c @@ -1689,10 +1689,22 @@ static int riscv_iommu_cache_invalidate_user(struct iommu_domain *domain, return ret; } +static struct iommu_domain * +riscv_iommu_get_msi_mapping_domain(struct iommu_domain *domain) +{ + struct riscv_iommu_domain *riscv_domain = iommu_domain_to_riscv(domain); + + if (riscv_domain->s2) + return &riscv_domain->s2->domain; + + return domain; +} + static const struct iommu_domain_ops riscv_iommu_nested_domain_ops = { .attach_dev = riscv_iommu_attach_dev_nested, .free = riscv_iommu_domain_free_nested, .cache_invalidate_user = riscv_iommu_cache_invalidate_user, + .get_msi_mapping_domain = riscv_iommu_get_msi_mapping_domain, }; static int