From patchwork Thu Dec 1 16:02:25 2022
X-Patchwork-Submitter: Rahul Singh
X-Patchwork-Id: 13061519
From: Rahul Singh
To: xen-devel@lists.xenproject.org
Cc: Jean-Philippe Brucker, Bertrand Marquis, Stefano Stabellini,
    Julien Grall, Volodymyr Babchuk, Jonathan Cameron, Eric Auger,
    Keqian Zhu, Will Deacon, Joerg Roedel
Subject: [RFC PATCH 01/21] xen/arm: smmuv3: Maintain a SID->device structure
Date: Thu, 1 Dec 2022 16:02:25 +0000
X-Mailer: git-send-email 2.25.1

From: Jean-Philippe Brucker

Backport Linux commit cdf315f907d4. This is a clean backport without any
changes.

When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb_tree to keep track of SIDs.
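
The lookup helper is added __maybe_unused here because its user, the event/PRI
fault handling path, is not yet wired up in the Xen driver at this point in the
series. As an illustrative sketch only (not part of the backported patch;
EVTQ_0_SID and the handler shape are assumptions taken from the upstream Linux
driver), the new rb_tree lookup is meant to be used roughly like this:

/*
 * Illustrative sketch: resolve the StreamID reported in an event record
 * to the owning master via the new rb_tree, under streams_mutex.
 */
static void handle_event(struct arm_smmu_device *smmu, u64 *evt)
{
	u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]);   /* StreamID from event word 0 */
	struct arm_smmu_master *master;

	mutex_lock(&smmu->streams_mutex);
	master = arm_smmu_find_master(smmu, sid);  /* O(log n) rb_tree walk */
	if (master)
		dev_warn(master->dev, "unhandled event for SID %u\n", sid);
	mutex_unlock(&smmu->streams_mutex);
}
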
Acked-by: Jonathan Cameron Reviewed-by: Eric Auger Reviewed-by: Keqian Zhu Signed-off-by: Jean-Philippe Brucker Acked-by: Will Deacon Link: https://lore.kernel.org/r/20210401154718.307519-8-jean-philippe@linaro.org Signed-off-by: Joerg Roedel Origin: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git cdf315f907d4 Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.c | 131 +++++++++++++++++++++----- xen/drivers/passthrough/arm/smmu-v3.h | 13 ++- 2 files changed, 118 insertions(+), 26 deletions(-) diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index 9c9f463009..cbef3f8b36 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -810,6 +810,27 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } +__maybe_unused +static struct arm_smmu_master * +arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) +{ + struct rb_node *node; + struct arm_smmu_stream *stream; + + node = smmu->streams.rb_node; + while (node) { + stream = rb_entry(node, struct arm_smmu_stream, node); + if (stream->id < sid) + node = node->rb_right; + else if (stream->id > sid) + node = node->rb_left; + else + return stream->master; + } + + return NULL; +} + /* IRQ and event handlers */ static void arm_smmu_evtq_tasklet(void *dev) { @@ -1047,8 +1068,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, if (!master->ats_enabled) return 0; - for (i = 0; i < master->num_sids; i++) { - cmd->atc.sid = master->sids[i]; + for (i = 0; i < master->num_streams; i++) { + cmd->atc.sid = master->streams[i].id; arm_smmu_cmdq_issue_cmd(master->smmu, cmd); } @@ -1276,13 +1297,13 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master) int i, j; struct arm_smmu_device *smmu = master->smmu; - for (i = 0; i < master->num_sids; ++i) { - u32 sid = master->sids[i]; + for (i = 0; i < master->num_streams; ++i) { + u32 sid = master->streams[i].id; __le64 *step = arm_smmu_get_step_for_sid(smmu, sid); /* Bridged PCI devices may end up with duplicated IDs */ for (j = 0; j < i; j++) - if (master->sids[j] == sid) + if (master->streams[j].id == sid) break; if (j < i) continue; @@ -1489,12 +1510,86 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid) return sid < limit; } + +static int arm_smmu_insert_master(struct arm_smmu_device *smmu, + struct arm_smmu_master *master) +{ + int i; + int ret = 0; + struct arm_smmu_stream *new_stream, *cur_stream; + struct rb_node **new_node, *parent_node = NULL; + struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev); + + master->streams = _xzalloc_array(sizeof(*master->streams), sizeof(void *), + fwspec->num_ids); + if (!master->streams) + return -ENOMEM; + master->num_streams = fwspec->num_ids; + + mutex_lock(&smmu->streams_mutex); + for (i = 0; i < fwspec->num_ids; i++) { + u32 sid = fwspec->ids[i]; + + new_stream = &master->streams[i]; + new_stream->id = sid; + new_stream->master = master; + + /* + * Check the SIDs are in range of the SMMU and our stream table + */ + if (!arm_smmu_sid_in_range(smmu, sid)) { + ret = -ERANGE; + break; + } + + /* Ensure l2 strtab is initialised */ + if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { + ret = arm_smmu_init_l2_strtab(smmu, sid); + if (ret) + break; + } + + /* Insert into SID tree */ + new_node = &(smmu->streams.rb_node); + while (*new_node) { + cur_stream = rb_entry(*new_node, struct arm_smmu_stream, + node); + parent_node = *new_node; + if (cur_stream->id > 
new_stream->id) { + new_node = &((*new_node)->rb_left); + } else if (cur_stream->id < new_stream->id) { + new_node = &((*new_node)->rb_right); + } else { + dev_warn(master->dev, + "stream %u already in tree\n", + cur_stream->id); + ret = -EINVAL; + break; + } + } + if (ret) + break; + + rb_link_node(&new_stream->node, parent_node, new_node); + rb_insert_color(&new_stream->node, &smmu->streams); + } + + if (ret) { + for (i--; i >= 0; i--) + rb_erase(&master->streams[i].node, &smmu->streams); + xfree(master->streams); + } + mutex_unlock(&smmu->streams_mutex); + + return ret; +} + /* Forward declaration */ static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev); static int arm_smmu_add_device(u8 devfn, struct device *dev) { - int i, ret; + int ret; struct arm_smmu_device *smmu; struct arm_smmu_master *master; struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); @@ -1512,26 +1607,11 @@ static int arm_smmu_add_device(u8 devfn, struct device *dev) master->dev = dev; master->smmu = smmu; - master->sids = fwspec->ids; - master->num_sids = fwspec->num_ids; dev_iommu_priv_set(dev, master); - /* Check the SIDs are in range of the SMMU and our stream table */ - for (i = 0; i < master->num_sids; i++) { - u32 sid = master->sids[i]; - - if (!arm_smmu_sid_in_range(smmu, sid)) { - ret = -ERANGE; - goto err_free_master; - } - - /* Ensure l2 strtab is initialised */ - if (smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB) { - ret = arm_smmu_init_l2_strtab(smmu, sid); - if (ret) - goto err_free_master; - } - } + ret = arm_smmu_insert_master(smmu, master); + if (ret) + goto err_free_master; /* * Note that PASID must be enabled before, and disabled after ATS: @@ -1752,6 +1832,9 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu) { int ret; + mutex_init(&smmu->streams_mutex); + smmu->streams = RB_ROOT; + ret = arm_smmu_init_queues(smmu); if (ret) return ret; diff --git a/xen/drivers/passthrough/arm/smmu-v3.h b/xen/drivers/passthrough/arm/smmu-v3.h index b381ad3738..b3bc7d64c7 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.h +++ b/xen/drivers/passthrough/arm/smmu-v3.h @@ -637,6 +637,15 @@ struct arm_smmu_device { struct tasklet evtq_irq_tasklet; struct tasklet priq_irq_tasklet; struct tasklet combined_irq_tasklet; + + struct rb_root streams; + struct mutex streams_mutex; +}; + +struct arm_smmu_stream { + u32 id; + struct arm_smmu_master *master; + struct rb_node node; }; /* SMMU private data for each master */ @@ -645,8 +654,8 @@ struct arm_smmu_master { struct device *dev; struct arm_smmu_domain *domain; struct list_head domain_head; - u32 *sids; - unsigned int num_sids; + struct arm_smmu_stream *streams; + unsigned int num_streams; bool ats_enabled; }; From patchwork Thu Dec 1 16:02:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061520 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CFE79C43217 for ; Thu, 1 Dec 2022 16:05:05 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450918.708440 (Exim 4.92) (envelope-from ) id 1p0m3R-0001Um-CJ; Thu, 01 Dec 2022 16:04:53 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version 
Received: by outflank-mailman (output) from mailman id 450918.708440; Thu, 01 Dec 2022 16:04:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m3R-0001Uf-9D; Thu, 01 Dec 2022 16:04:53 +0000 Received: by outflank-mailman (input) for mailman id 450918; Thu, 01 Dec 2022 16:04:52 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m3Q-0001UO-2L for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:04:52 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id e29d28b4-7191-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:04:50 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AFF53ED1; Thu, 1 Dec 2022 08:04:56 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4D6D43F67D; Thu, 1 Dec 2022 08:04:49 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: e29d28b4-7191-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Bertrand Marquis , Stefano Stabellini , Julien Grall , Volodymyr Babchuk Subject: [RFC PATCH 02/21] xen/arm: smmuv3: Add support for stage-1 and nested stage translation Date: Thu, 1 Dec 2022 16:02:26 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Xen SMMUv3 driver only supports stage-2 translation. Add support for Stage-1 translation that is required to support nested stage translation. In true nested mode, both s1_cfg and s2_cfg will coexist. Let's remove the union. When nested stage translation is setup, both s1_cfg and s2_cfg are valid. We introduce a new smmu_domain abort field that will be set upon guest stage-1 configuration passing. If no guest stage-1 config has been attached, it is ignored when writing the STE. arm_smmu_write_strtab_ent() is modified to write both stage fields in the STE and deal with the abort field. Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.c | 94 +++++++++++++++++++++++---- xen/drivers/passthrough/arm/smmu-v3.h | 9 +++ 2 files changed, 92 insertions(+), 11 deletions(-) diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index cbef3f8b36..866fe8de4d 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -686,8 +686,10 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, * 3. 
Update Config, sync */ u64 val = le64_to_cpu(dst[0]); - bool ste_live = false; + bool s1_live = false, s2_live = false, ste_live = false; + bool abort, translate = false; struct arm_smmu_device *smmu = NULL; + struct arm_smmu_s1_cfg *s1_cfg = NULL; struct arm_smmu_s2_cfg *s2_cfg = NULL; struct arm_smmu_domain *smmu_domain = NULL; struct arm_smmu_cmdq_ent prefetch_cmd = { @@ -702,30 +704,54 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, smmu = master->smmu; } - if (smmu_domain) - s2_cfg = &smmu_domain->s2_cfg; + if (smmu_domain) { + switch (smmu_domain->stage) { + case ARM_SMMU_DOMAIN_NESTED: + s1_cfg = &smmu_domain->s1_cfg; + fallthrough; + case ARM_SMMU_DOMAIN_S2: + s2_cfg = &smmu_domain->s2_cfg; + break; + default: + break; + } + translate = !!s1_cfg || !!s2_cfg; + } if (val & STRTAB_STE_0_V) { switch (FIELD_GET(STRTAB_STE_0_CFG, val)) { case STRTAB_STE_0_CFG_BYPASS: break; + case STRTAB_STE_0_CFG_S1_TRANS: + s1_live = true; + break; case STRTAB_STE_0_CFG_S2_TRANS: - ste_live = true; + s2_live = true; + break; + case STRTAB_STE_0_CFG_NESTED: + s1_live = true; + s2_live = true; break; case STRTAB_STE_0_CFG_ABORT: - BUG_ON(!disable_bypass); break; default: BUG(); /* STE corruption */ } } + ste_live = s1_live || s2_live; + /* Nuke the existing STE_0 value, as we're going to rewrite it */ val = STRTAB_STE_0_V; /* Bypass/fault */ - if (!smmu_domain || !(s2_cfg)) { - if (!smmu_domain && disable_bypass) + if (!smmu_domain) + abort = disable_bypass; + else + abort = smmu_domain->abort; + + if (abort || !translate) { + if (abort) val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT); else val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS); @@ -743,8 +769,39 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, return; } + if (ste_live) { + /* First invalidate the live STE */ + dst[0] = cpu_to_le64(STRTAB_STE_0_CFG_ABORT); + arm_smmu_sync_ste_for_sid(smmu, sid); + } + + if (s1_cfg) { + BUG_ON(s1_live); + dst[1] = cpu_to_le64( + FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) | + FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) | + FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) | + FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) | + FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1)); + + if (smmu->features & ARM_SMMU_FEAT_STALLS && + !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE)) + dst[1] |= cpu_to_le64(STRTAB_STE_1_S1STALLD); + + val |= (s1_cfg->s1ctxptr & STRTAB_STE_0_S1CTXPTR_MASK) | + FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S1_TRANS) | + FIELD_PREP(STRTAB_STE_0_S1CDMAX, s1_cfg->s1cdmax) | + FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt); + } + if (s2_cfg) { - BUG_ON(ste_live); + u64 vttbr = s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK; + + if (s2_live) { + u64 s2ttb = le64_to_cpu(dst[3]) & STRTAB_STE_3_S2TTB_MASK; + BUG_ON(s2ttb != vttbr); + } + dst[2] = cpu_to_le64( FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) | FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) | @@ -754,9 +811,12 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 | STRTAB_STE_2_S2R); - dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK); + dst[3] = cpu_to_le64(vttbr); val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS); + } else { + dst[2] = 0; + dst[3] = 0; } if (master->ats_enabled) @@ -1259,6 +1319,15 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain, { int ret; struct arm_smmu_domain *smmu_domain = 
to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED && + (!(smmu->features & ARM_SMMU_FEAT_TRANS_S1) || + !(smmu->features & ARM_SMMU_FEAT_TRANS_S2))) { + dev_info(smmu_domain->smmu->dev, + "does not implement two stages\n"); + return -EINVAL; + } /* Restrict the stage to what we can actually support */ smmu_domain->stage = ARM_SMMU_DOMAIN_S2; @@ -2307,11 +2376,14 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu) smmu->features |= ARM_SMMU_FEAT_STALLS; } + if (reg & IDR0_S1P) + smmu->features |= ARM_SMMU_FEAT_TRANS_S1; + if (reg & IDR0_S2P) smmu->features |= ARM_SMMU_FEAT_TRANS_S2; - if (!(reg & IDR0_S2P)) { - dev_err(smmu->dev, "no stage-2 translation support!\n"); + if (!(reg & (IDR0_S1P | IDR0_S2P))) { + dev_err(smmu->dev, "no translation support!\n"); return -ENXIO; } diff --git a/xen/drivers/passthrough/arm/smmu-v3.h b/xen/drivers/passthrough/arm/smmu-v3.h index b3bc7d64c7..e270fe05e0 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.h +++ b/xen/drivers/passthrough/arm/smmu-v3.h @@ -197,6 +197,7 @@ #define STRTAB_STE_0_CFG_BYPASS 4 #define STRTAB_STE_0_CFG_S1_TRANS 5 #define STRTAB_STE_0_CFG_S2_TRANS 6 +#define STRTAB_STE_0_CFG_NESTED 7 #define STRTAB_STE_0_S1FMT GENMASK_ULL(5, 4) #define STRTAB_STE_0_S1FMT_LINEAR 0 @@ -547,6 +548,12 @@ struct arm_smmu_strtab_l1_desc { dma_addr_t l2ptr_dma; }; +struct arm_smmu_s1_cfg { + u64 s1ctxptr; + u8 s1fmt; + u8 s1cdmax; +}; + struct arm_smmu_s2_cfg { u16 vmid; u64 vttbr; @@ -667,7 +674,9 @@ struct arm_smmu_domain { atomic_t nr_ats_masters; enum arm_smmu_domain_stage stage; + struct arm_smmu_s1_cfg s1_cfg; struct arm_smmu_s2_cfg s2_cfg; + bool abort; /* Xen domain associated with this SMMU domain */ struct domain *d; From patchwork Thu Dec 1 16:02:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061521 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 485CBC43217 for ; Thu, 1 Dec 2022 16:05:49 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450926.708462 (Exim 4.92) (envelope-from ) id 1p0m4C-0002Wh-Vz; Thu, 01 Dec 2022 16:05:40 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450926.708462; Thu, 01 Dec 2022 16:05:40 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m4C-0002Wa-RW; Thu, 01 Dec 2022 16:05:40 +0000 Received: by outflank-mailman (input) for mailman id 450926; Thu, 01 Dec 2022 16:05:40 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m4C-0001G4-0i for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:05:40 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id ff7e8dd5-7191-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:05:39 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP 
id 31615ED1; Thu, 1 Dec 2022 08:05:45 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C2E163F73B; Thu, 1 Dec 2022 08:05:37 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: ff7e8dd5-7191-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Bertrand Marquis , Stefano Stabellini , Julien Grall , Volodymyr Babchuk Subject: [RFC PATCH 03/21] xen/arm: smmuv3: Alloc io_domain for each device Date: Thu, 1 Dec 2022 16:02:27 +0000 Message-Id: <9dcf6a14c77db281933c0a5a19a58b0454ce587c.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 In current implementation io_domain is allocated once for each xen domain as Stage2 translation is common for all devices in same xen domain. Nested stage supports S1 and S2 configuration at the same time. Stage1 translation will be different for each device as linux kernel will allocate page-table for each device. Alloc io_domain for each device so that each device can have different Stage-1 and Stage-2 configuration structure. Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.c | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index 866fe8de4d..9174d2dedd 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -2753,11 +2753,13 @@ static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev) static struct iommu_domain *arm_smmu_get_domain(struct domain *d, struct device *dev) { + unsigned long flags; struct iommu_domain *io_domain; struct arm_smmu_domain *smmu_domain; struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev); struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv; struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev); + struct arm_smmu_master *master; if (!smmu) return NULL; @@ -2768,8 +2770,15 @@ static struct iommu_domain *arm_smmu_get_domain(struct domain *d, */ list_for_each_entry(io_domain, &xen_domain->contexts, list) { smmu_domain = to_smmu_domain(io_domain); - if (smmu_domain->smmu == smmu) - return io_domain; + + spin_lock_irqsave(&smmu_domain->devices_lock, flags); + list_for_each_entry(master, &smmu_domain->devices, domain_head) { + if (master->dev == dev) { + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); + return io_domain; + } + } + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); } return NULL; } From patchwork Thu Dec 1 16:02:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061522 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E8DD3C43217 for ; Thu, 1 Dec 2022 16:06:26 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450937.708473 (Exim 4.92) (envelope-from ) id 1p0m4m-0003E5-Bq; Thu, 01 Dec 2022 
16:06:16 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450937.708473; Thu, 01 Dec 2022 16:06:16 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m4m-0003Du-8z; Thu, 01 Dec 2022 16:06:16 +0000 Received: by outflank-mailman (input) for mailman id 450937; Thu, 01 Dec 2022 16:06:14 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m4k-0001UO-80 for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:06:14 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 13b66383-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:06:13 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1A3EDED1; Thu, 1 Dec 2022 08:06:19 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 37B9B3F73B; Thu, 1 Dec 2022 08:06:11 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 13b66383-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Jan Beulich , Paul Durrant , =?utf-8?q?Rog?= =?utf-8?q?er_Pau_Monn=C3=A9?= Subject: [RFC PATCH 04/21] xen/arm: vIOMMU: add generic vIOMMU framework Date: Thu, 1 Dec 2022 16:02:28 +0000 Message-Id: <505b4566579b65afa0696c3a8772416a4c7cf59f.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 This patch adds basic framework for vIOMMU. 
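
The framework only dispatches through a single registered viommu_desc; a
backend supplies the actual behaviour through viommu_ops and points cur_viommu
at its descriptor from its probe path. A minimal sketch of what such a hook-up
looks like (patch 05 does this for real for vSMMUv3; the my_* names here are
placeholders, not code from the series):

extern const struct viommu_desc *cur_viommu;

static int my_viommu_domain_init(struct domain *d)
{
    /* Allocate per-domain emulation state, register MMIO traps, ... */
    return 0;
}

static int my_viommu_relinquish_resources(struct domain *d)
{
    /* Free whatever domain_init() allocated. */
    return 0;
}

static const struct viommu_ops my_viommu_ops = {
    .domain_init = my_viommu_domain_init,
    .relinquish_resources = my_viommu_relinquish_resources,
};

static const struct viommu_desc my_viommu_desc = {
    .ops = &my_viommu_ops,
    /* Replace with the backend's XEN_DOMCTL_CONFIG_VIOMMU_* constant. */
    .viommu_type = XEN_DOMCTL_CONFIG_VIOMMU_NONE,
};

/* Called from the backend's probe path, before any domain is created. */
void my_viommu_set_type(void)
{
    cur_viommu = &my_viommu_desc;
}
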
Signed-off-by: Rahul Singh --- xen/arch/arm/domain.c | 17 +++++++ xen/arch/arm/domain_build.c | 3 ++ xen/arch/arm/include/asm/viommu.h | 70 ++++++++++++++++++++++++++++ xen/drivers/passthrough/Kconfig | 6 +++ xen/drivers/passthrough/arm/Makefile | 1 + xen/drivers/passthrough/arm/viommu.c | 48 +++++++++++++++++++ xen/include/public/arch-arm.h | 4 ++ 7 files changed, 149 insertions(+) create mode 100644 xen/arch/arm/include/asm/viommu.h create mode 100644 xen/drivers/passthrough/arm/viommu.c diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 38e22f12af..2a85209736 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include "vpci.h" @@ -691,6 +692,13 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config) return -EINVAL; } + if ( config->arch.viommu_type != XEN_DOMCTL_CONFIG_VIOMMU_NONE ) + { + dprintk(XENLOG_INFO, + "vIOMMU type requested not supported by the platform or Xen\n"); + return -EINVAL; + } + return 0; } @@ -783,6 +791,9 @@ int arch_domain_create(struct domain *d, if ( (rc = domain_vpci_init(d)) != 0 ) goto fail; + if ( (rc = domain_viommu_init(d, config->arch.viommu_type)) != 0 ) + goto fail; + return 0; fail: @@ -998,6 +1009,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list) enum { PROG_pci = 1, PROG_tee, + PROG_viommu, PROG_xen, PROG_page, PROG_mapping, @@ -1048,6 +1060,11 @@ int domain_relinquish_resources(struct domain *d) if (ret ) return ret; + PROGRESS(viommu): + ret = viommu_relinquish_resources(d); + if (ret ) + return ret; + PROGRESS(xen): ret = relinquish_memory(d, &d->xenpage_list); if ( ret ) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index bd30d3798c..abbaf37a2e 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include @@ -3858,6 +3859,7 @@ void __init create_domUs(void) struct domain *d; struct xen_domctl_createdomain d_cfg = { .arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE, + .arch.viommu_type = viommu_get_type(), .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap, /* * The default of 1023 should be sufficient for guests because @@ -4052,6 +4054,7 @@ void __init create_dom0(void) printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded.\n"); dom0_cfg.arch.tee_type = tee_get_type(); dom0_cfg.max_vcpus = dom0_max_vcpus(); + dom0_cfg.arch.viommu_type = viommu_get_type(); if ( iommu_enabled ) dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu; diff --git a/xen/arch/arm/include/asm/viommu.h b/xen/arch/arm/include/asm/viommu.h new file mode 100644 index 0000000000..7cd3818a12 --- /dev/null +++ b/xen/arch/arm/include/asm/viommu.h @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */ +#ifndef __ARCH_ARM_VIOMMU_H__ +#define __ARCH_ARM_VIOMMU_H__ + +#ifdef CONFIG_VIRTUAL_IOMMU + +#include +#include +#include + +struct viommu_ops { + /* + * Called during domain construction if toolstack requests to enable + * vIOMMU support. + */ + int (*domain_init)(struct domain *d); + + /* + * Called during domain destruction to free resources used by vIOMMU. + */ + int (*relinquish_resources)(struct domain *d); +}; + +struct viommu_desc { + /* vIOMMU domains init/free operations described above. */ + const struct viommu_ops *ops; + + /* + * ID of vIOMMU. Corresponds to xen_arch_domainconfig.viommu_type. 
+ * Should be one of XEN_DOMCTL_CONFIG_VIOMMU_xxx + */ + uint16_t viommu_type; +}; + +int domain_viommu_init(struct domain *d, uint16_t viommu_type); +int viommu_relinquish_resources(struct domain *d); +uint16_t viommu_get_type(void); + +#else + +static inline uint8_t viommu_get_type(void) +{ + return XEN_DOMCTL_CONFIG_VIOMMU_NONE; +} + +static inline int domain_viommu_init(struct domain *d, uint16_t viommu_type) +{ + if ( likely(viommu_type == XEN_DOMCTL_CONFIG_VIOMMU_NONE) ) + return 0; + + return -ENODEV; +} + +static inline int viommu_relinquish_resources(struct domain *d) +{ + return 0; +} + +#endif /* CONFIG_VIRTUAL_IOMMU */ + +#endif /* __ARCH_ARM_VIOMMU_H__ */ + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig index 479d7de57a..19924fa2de 100644 --- a/xen/drivers/passthrough/Kconfig +++ b/xen/drivers/passthrough/Kconfig @@ -35,6 +35,12 @@ config IPMMU_VMSA (H3 ES3.0, M3-W+, etc) or Gen4 SoCs which IPMMU hardware supports stage 2 translation table format and is able to use CPU's P2M table as is. +config VIRTUAL_IOMMU + bool "Virtual IOMMU Support (UNSUPPORTED)" if UNSUPPORTED + default n + help + Support virtual IOMMU infrastructure to implement vIOMMU. + endif config IOMMU_FORCE_PT_SHARE diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile index c5fb3b58a5..4cc54f3f4d 100644 --- a/xen/drivers/passthrough/arm/Makefile +++ b/xen/drivers/passthrough/arm/Makefile @@ -2,3 +2,4 @@ obj-y += iommu.o iommu_helpers.o iommu_fwspec.o obj-$(CONFIG_ARM_SMMU) += smmu.o obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o +obj-$(CONFIG_VIRTUAL_IOMMU) += viommu.o diff --git a/xen/drivers/passthrough/arm/viommu.c b/xen/drivers/passthrough/arm/viommu.c new file mode 100644 index 0000000000..7ab6061e34 --- /dev/null +++ b/xen/drivers/passthrough/arm/viommu.c @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */ + +#include +#include +#include + +#include + +const struct viommu_desc __read_mostly *cur_viommu; + +int domain_viommu_init(struct domain *d, uint16_t viommu_type) +{ + if ( viommu_type == XEN_DOMCTL_CONFIG_VIOMMU_NONE ) + return 0; + + if ( !cur_viommu ) + return -ENODEV; + + if ( cur_viommu->viommu_type != viommu_type ) + return -EINVAL; + + return cur_viommu->ops->domain_init(d); +} + +int viommu_relinquish_resources(struct domain *d) +{ + if ( !cur_viommu ) + return 0; + + return cur_viommu->ops->relinquish_resources(d); +} + +uint16_t viommu_get_type(void) +{ + if ( !cur_viommu ) + return XEN_DOMCTL_CONFIG_VIOMMU_NONE; + + return cur_viommu->viommu_type; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index 1528ced509..33d32835e7 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -297,10 +297,14 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t); #define XEN_DOMCTL_CONFIG_TEE_NONE 0 #define XEN_DOMCTL_CONFIG_TEE_OPTEE 1 +#define XEN_DOMCTL_CONFIG_VIOMMU_NONE 0 + struct xen_arch_domainconfig { /* IN/OUT */ uint8_t gic_version; /* IN */ + uint8_t viommu_type; + /* IN */ uint16_t tee_type; /* IN */ uint32_t nr_spis; From patchwork Thu Dec 1 16:02:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: 
Rahul Singh X-Patchwork-Id: 13061536 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id BF71CC43217 for ; Thu, 1 Dec 2022 16:11:46 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450982.708539 (Exim 4.92) (envelope-from ) id 1p0m9w-0007v7-5w; Thu, 01 Dec 2022 16:11:36 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450982.708539; Thu, 01 Dec 2022 16:11:36 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m9w-0007v0-1p; Thu, 01 Dec 2022 16:11:36 +0000 Received: by outflank-mailman (input) for mailman id 450982; Thu, 01 Dec 2022 16:11:35 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m5A-0001G4-2n for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:06:40 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 22f967f6-7192-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:06:38 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9D00BED1; Thu, 1 Dec 2022 08:06:44 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DDA703F73B; Thu, 1 Dec 2022 08:06:36 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 22f967f6-7192-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Jan Beulich , Paul Durrant , =?utf-8?q?Rog?= =?utf-8?q?er_Pau_Monn=C3=A9?= Subject: [RFC PATCH 05/21] xen/arm: vsmmuv3: Add dummy support for virtual SMMUv3 for guests Date: Thu, 1 Dec 2022 16:02:29 +0000 Message-Id: <6d25bcb543190d78c26db15a0f07e9af79349884.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 domain_viommu_init() will be called during domain creation and will add the dummy trap handler for virtual IOMMUs for guests. A host IOMMU list will be created when host IOMMU devices are probed and this list will be used to create the IOMMU device tree node for dom0. For dom0, 1-1 mapping will be established between vIOMMU in dom0 and physical IOMMU. For domUs, the 1-N mapping will be established between domU and physical IOMMUs. A new area has been reserved in the arm guest physical map at which the emulated vIOMMU node is created in the device tree. Also set the vIOMMU type to vSMMUv3 to enable vIOMMU framework to call vSMMUv3 domain creation/destroy functions. 
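
For orientation, the intended flow across this and the previous patch, using
the function names from the diffs below (comment-style sketch only):

/*
 * Probe time (physical SMMUv3 driver):
 *   arm_smmu_device_probe()
 *     add_to_host_iommu_list(ioaddr, iosize, node)  - record each host SMMU
 *   arm_smmu_dt_init()
 *     vsmmuv3_set_type()                            - cur_viommu = &vsmmuv3_desc
 *
 * Domain creation:
 *   arch_domain_create()
 *     domain_viommu_init(d, viommu_type)            - generic dispatch (patch 04)
 *       domain_vsmmuv3_init(d)
 *         dom0: one emulated vSMMUv3 per host_iommu_list entry (1-1 mapping)
 *         domU: one emulated vSMMUv3 at GUEST_VSMMUV3_BASE/GUEST_VSMMUV3_SIZE
 */
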
Signed-off-by: Rahul Singh --- xen/arch/arm/domain.c | 3 +- xen/arch/arm/include/asm/domain.h | 4 + xen/arch/arm/include/asm/viommu.h | 20 ++++ xen/drivers/passthrough/Kconfig | 8 ++ xen/drivers/passthrough/arm/Makefile | 1 + xen/drivers/passthrough/arm/smmu-v3.c | 7 ++ xen/drivers/passthrough/arm/viommu.c | 30 ++++++ xen/drivers/passthrough/arm/vsmmu-v3.c | 124 +++++++++++++++++++++++++ xen/drivers/passthrough/arm/vsmmu-v3.h | 20 ++++ xen/include/public/arch-arm.h | 7 +- 10 files changed, 222 insertions(+), 2 deletions(-) create mode 100644 xen/drivers/passthrough/arm/vsmmu-v3.c create mode 100644 xen/drivers/passthrough/arm/vsmmu-v3.h diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index 2a85209736..9a2b613500 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -692,7 +692,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config) return -EINVAL; } - if ( config->arch.viommu_type != XEN_DOMCTL_CONFIG_VIOMMU_NONE ) + if ( config->arch.viommu_type != XEN_DOMCTL_CONFIG_VIOMMU_NONE && + config->arch.viommu_type != viommu_get_type() ) { dprintk(XENLOG_INFO, "vIOMMU type requested not supported by the platform or Xen\n"); diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h index 2ce6764322..8eb4eb5fd6 100644 --- a/xen/arch/arm/include/asm/domain.h +++ b/xen/arch/arm/include/asm/domain.h @@ -114,6 +114,10 @@ struct arch_domain void *tee; #endif +#ifdef CONFIG_VIRTUAL_IOMMU + struct list_head viommu_list; /* List of virtual IOMMUs */ +#endif + } __cacheline_aligned; struct arch_vcpu diff --git a/xen/arch/arm/include/asm/viommu.h b/xen/arch/arm/include/asm/viommu.h index 7cd3818a12..4785877e2a 100644 --- a/xen/arch/arm/include/asm/viommu.h +++ b/xen/arch/arm/include/asm/viommu.h @@ -5,9 +5,21 @@ #ifdef CONFIG_VIRTUAL_IOMMU #include +#include #include #include +extern struct list_head host_iommu_list; + +/* data structure for each hardware IOMMU */ +struct host_iommu { + struct list_head entry; + const struct dt_device_node *dt_node; + paddr_t addr; + paddr_t size; + uint32_t irq; +}; + struct viommu_ops { /* * Called during domain construction if toolstack requests to enable @@ -35,6 +47,8 @@ struct viommu_desc { int domain_viommu_init(struct domain *d, uint16_t viommu_type); int viommu_relinquish_resources(struct domain *d); uint16_t viommu_get_type(void); +void add_to_host_iommu_list(paddr_t addr, paddr_t size, + const struct dt_device_node *node); #else @@ -56,6 +70,12 @@ static inline int viommu_relinquish_resources(struct domain *d) return 0; } +static inline void add_to_host_iommu_list(paddr_t addr, paddr_t size, + const struct dt_device_node *node) +{ + return; +} + #endif /* CONFIG_VIRTUAL_IOMMU */ #endif /* __ARCH_ARM_VIOMMU_H__ */ diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig index 19924fa2de..4c725f5f67 100644 --- a/xen/drivers/passthrough/Kconfig +++ b/xen/drivers/passthrough/Kconfig @@ -41,6 +41,14 @@ config VIRTUAL_IOMMU help Support virtual IOMMU infrastructure to implement vIOMMU. +config VIRTUAL_ARM_SMMU_V3 + bool "ARM Ltd. Virtual SMMUv3 Support (UNSUPPORTED)" if UNSUPPORTED + depends on ARM_SMMU_V3 && VIRTUAL_IOMMU + help + Support for implementations of the virtual ARM System MMU architecture + version 3. Virtual SMMUv3 is unsupported feature and should not be used + in production. 
+ endif config IOMMU_FORCE_PT_SHARE diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile index 4cc54f3f4d..e758a9d6aa 100644 --- a/xen/drivers/passthrough/arm/Makefile +++ b/xen/drivers/passthrough/arm/Makefile @@ -3,3 +3,4 @@ obj-$(CONFIG_ARM_SMMU) += smmu.o obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o obj-$(CONFIG_VIRTUAL_IOMMU) += viommu.o +obj-$(CONFIG_VIRTUAL_ARM_SMMU_V3) += vsmmu-v3.o diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index 9174d2dedd..4f96fdb92f 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -91,6 +91,7 @@ #include #include "smmu-v3.h" +#include "vsmmu-v3.h" #define ARM_SMMU_VTCR_SH_IS 3 #define ARM_SMMU_VTCR_RGN_WBWA 1 @@ -2680,6 +2681,9 @@ static int arm_smmu_device_probe(struct platform_device *pdev) list_add(&smmu->devices, &arm_smmu_devices); spin_unlock(&arm_smmu_devices_lock); + /* Add to host IOMMU list to initialize vIOMMU for dom0 */ + add_to_host_iommu_list(ioaddr, iosize, dev_to_dt(pdev)); + return 0; @@ -2936,6 +2940,9 @@ static __init int arm_smmu_dt_init(struct dt_device_node *dev, iommu_set_ops(&arm_smmu_iommu_ops); + /* Set vIOMMU type to SMMUv3 */ + vsmmuv3_set_type(); + return 0; } diff --git a/xen/drivers/passthrough/arm/viommu.c b/xen/drivers/passthrough/arm/viommu.c index 7ab6061e34..53ae46349a 100644 --- a/xen/drivers/passthrough/arm/viommu.c +++ b/xen/drivers/passthrough/arm/viommu.c @@ -2,12 +2,42 @@ #include #include +#include #include #include +/* List of all host IOMMUs */ +LIST_HEAD(host_iommu_list); + const struct viommu_desc __read_mostly *cur_viommu; +/* Common function for adding to host_iommu_list */ +void add_to_host_iommu_list(paddr_t addr, paddr_t size, + const struct dt_device_node *node) +{ + struct host_iommu *iommu_data; + + iommu_data = xzalloc(struct host_iommu); + if ( !iommu_data ) + panic("vIOMMU: Cannot allocate memory for host IOMMU data\n"); + + iommu_data->addr = addr; + iommu_data->size = size; + iommu_data->dt_node = node; + iommu_data->irq = platform_get_irq(node, 0); + if ( iommu_data->irq < 0 ) + { + gdprintk(XENLOG_ERR, + "vIOMMU: Cannot find a valid IOMMU irq\n"); + return; + } + + printk("vIOMMU: Found IOMMU @0x%"PRIx64"\n", addr); + + list_add_tail(&iommu_data->entry, &host_iommu_list); +} + int domain_viommu_init(struct domain *d, uint16_t viommu_type) { if ( viommu_type == XEN_DOMCTL_CONFIG_VIOMMU_NONE ) diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c new file mode 100644 index 0000000000..6b4009e5ef --- /dev/null +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -0,0 +1,124 @@ +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */ + +#include +#include +#include +#include + +/* Struct to hold the vIOMMU ops and vIOMMU type */ +extern const struct viommu_desc __read_mostly *cur_viommu; + +struct virt_smmu { + struct domain *d; + struct list_head viommu_list; +}; + +static int vsmmuv3_mmio_write(struct vcpu *v, mmio_info_t *info, + register_t r, void *priv) +{ + return IO_HANDLED; +} + +static int vsmmuv3_mmio_read(struct vcpu *v, mmio_info_t *info, + register_t *r, void *priv) +{ + return IO_HANDLED; +} + +static const struct mmio_handler_ops vsmmuv3_mmio_handler = { + .read = vsmmuv3_mmio_read, + .write = vsmmuv3_mmio_write, +}; + +static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size) +{ + struct virt_smmu *smmu; + + smmu = xzalloc(struct virt_smmu); + if ( !smmu ) + 
return -ENOMEM; + + smmu->d = d; + + register_mmio_handler(d, &vsmmuv3_mmio_handler, addr, size, smmu); + + /* Register the vIOMMU to be able to clean it up later. */ + list_add_tail(&smmu->viommu_list, &d->arch.viommu_list); + + return 0; +} + +int domain_vsmmuv3_init(struct domain *d) +{ + int ret; + INIT_LIST_HEAD(&d->arch.viommu_list); + + if ( is_hardware_domain(d) ) + { + struct host_iommu *hw_iommu; + + list_for_each_entry(hw_iommu, &host_iommu_list, entry) + { + ret = vsmmuv3_init_single(d, hw_iommu->addr, hw_iommu->size); + if ( ret ) + return ret; + } + } + else + { + ret = vsmmuv3_init_single(d, GUEST_VSMMUV3_BASE, GUEST_VSMMUV3_SIZE); + if ( ret ) + return ret; + } + + return 0; +} + +int vsmmuv3_relinquish_resources(struct domain *d) +{ + struct virt_smmu *pos, *temp; + + /* Cope with unitialized vIOMMU */ + if ( list_head_is_null(&d->arch.viommu_list) ) + return 0; + + list_for_each_entry_safe(pos, temp, &d->arch.viommu_list, viommu_list ) + { + list_del(&pos->viommu_list); + xfree(pos); + } + + return 0; +} + +static const struct viommu_ops vsmmuv3_ops = { + .domain_init = domain_vsmmuv3_init, + .relinquish_resources = vsmmuv3_relinquish_resources, +}; + +static const struct viommu_desc vsmmuv3_desc = { + .ops = &vsmmuv3_ops, + .viommu_type = XEN_DOMCTL_CONFIG_VIOMMU_SMMUV3, +}; + +void __init vsmmuv3_set_type(void) +{ + const struct viommu_desc *desc = &vsmmuv3_desc; + + if ( cur_viommu && (cur_viommu != desc) ) + { + printk("WARNING: Cannot set vIOMMU, already set to a different value\n"); + return; + } + + cur_viommu = desc; +} + +/* + * Local variables: + * mode: C + * c-file-style: "BSD" + * c-basic-offset: 4 + * indent-tabs-mode: nil + * End: + */ diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.h b/xen/drivers/passthrough/arm/vsmmu-v3.h new file mode 100644 index 0000000000..e11f85b431 --- /dev/null +++ b/xen/drivers/passthrough/arm/vsmmu-v3.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */ +#ifndef __ARCH_ARM_VSMMU_V3_H__ +#define __ARCH_ARM_VSMMU_V3_H__ + +#include + +#ifdef CONFIG_VIRTUAL_ARM_SMMU_V3 + +void vsmmuv3_set_type(void); + +#else + +static inline void vsmmuv3_set_type(void) +{ + return; +} + +#endif /* CONFIG_VIRTUAL_ARM_SMMU_V3 */ + +#endif /* __ARCH_ARM_VSMMU_V3_H__ */ diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index 33d32835e7..24b52fa017 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -297,7 +297,8 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t); #define XEN_DOMCTL_CONFIG_TEE_NONE 0 #define XEN_DOMCTL_CONFIG_TEE_OPTEE 1 -#define XEN_DOMCTL_CONFIG_VIOMMU_NONE 0 +#define XEN_DOMCTL_CONFIG_VIOMMU_NONE 0 +#define XEN_DOMCTL_CONFIG_VIOMMU_SMMUV3 1 struct xen_arch_domainconfig { /* IN/OUT */ @@ -418,6 +419,10 @@ typedef uint64_t xen_callback_t; #define GUEST_GICV3_GICR0_BASE xen_mk_ullong(0x03020000) /* vCPU0..127 */ #define GUEST_GICV3_GICR0_SIZE xen_mk_ullong(0x01000000) +/* vsmmuv3 ITS mappings */ +#define GUEST_VSMMUV3_BASE xen_mk_ullong(0x04040000) +#define GUEST_VSMMUV3_SIZE xen_mk_ullong(0x00040000) + /* * 256 MB is reserved for VPCI configuration space based on calculation * 256 buses x 32 devices x 8 functions x 4 KB = 256 MB From patchwork Thu Dec 1 16:02:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061531 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D3362C47088 for ; Thu, 1 Dec 2022 16:07:17 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450942.708484 (Exim 4.92) (envelope-from ) id 1p0m5c-0003pd-M2; Thu, 01 Dec 2022 16:07:08 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450942.708484; Thu, 01 Dec 2022 16:07:08 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m5c-0003pW-IT; Thu, 01 Dec 2022 16:07:08 +0000 Received: by outflank-mailman (input) for mailman id 450942; Thu, 01 Dec 2022 16:07:07 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m5b-0003mC-Lv for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:07:07 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 3397a7d0-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:07:06 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9EC0DED1; Thu, 1 Dec 2022 08:07:12 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 214B03F73B; Thu, 1 Dec 2022 08:07:05 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 3397a7d0-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Wei Liu , Anthony PERARD , George Dunlap , Nick Rosbrook , Juergen Gross Subject: [RFC PATCH 06/21] xen/domctl: Add XEN_DOMCTL_CONFIG_VIOMMU_* and viommu config param Date: Thu, 1 Dec 2022 16:02:30 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Add new viommu_type field and field values XEN_DOMCTL_CONFIG_VIOMMU_NONE XEN_DOMCTL_CONFIG_VIOMMU_SMMUV3 in xen_arch_domainconfig to enable/disable vIOMMU support for domains. Also add viommu="N" parameter to xl domain configuration to enable the vIOMMU for the domains. Currently, only the "smmuv3" type is supported for ARM. Signed-off-by: Rahul Singh --- docs/man/xl.cfg.5.pod.in | 11 +++++++++++ tools/golang/xenlight/helpers.gen.go | 2 ++ tools/golang/xenlight/types.gen.go | 1 + tools/include/libxl.h | 5 +++++ tools/libs/light/libxl_arm.c | 13 +++++++++++++ tools/libs/light/libxl_types.idl | 6 ++++++ tools/xl/xl_parse.c | 9 +++++++++ 7 files changed, 47 insertions(+) diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in index ec444fb2ba..5854d777ed 100644 --- a/docs/man/xl.cfg.5.pod.in +++ b/docs/man/xl.cfg.5.pod.in @@ -2870,6 +2870,17 @@ Currently, only the "sbsa_uart" model is supported for ARM. =back +=item B + +To enable viommu, user must specify the following option in the VM +config file: + +viommu = "smmuv3" + +Currently, only the "smmuv3" type is supported for ARM. 
+ +=back + =head3 x86 =over 4 diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go index cb1bdf9bdf..8b6d771fc7 100644 --- a/tools/golang/xenlight/helpers.gen.go +++ b/tools/golang/xenlight/helpers.gen.go @@ -1117,6 +1117,7 @@ default: return fmt.Errorf("invalid union key '%v'", x.Type)} x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version) x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart) +x.ArchArm.Viommu = ViommuType(xc.arch_arm.viommu) if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil { return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err) } @@ -1602,6 +1603,7 @@ default: return fmt.Errorf("invalid union key '%v'", x.Type)} xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion) xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart) +xc.arch_arm.viommu = C.libxl_viommu_type(x.ArchArm.Viommu) if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil { return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err) } diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go index 871576fb0e..16c835ebeb 100644 --- a/tools/golang/xenlight/types.gen.go +++ b/tools/golang/xenlight/types.gen.go @@ -531,6 +531,7 @@ TypeUnion DomainBuildInfoTypeUnion ArchArm struct { GicVersion GicVersion Vuart VuartType +Viommu ViommuType } ArchX86 struct { MsrRelaxed Defbool diff --git a/tools/include/libxl.h b/tools/include/libxl.h index d652895075..49563f57bd 100644 --- a/tools/include/libxl.h +++ b/tools/include/libxl.h @@ -278,6 +278,11 @@ */ #define LIBXL_HAVE_BUILDINFO_ARCH_ARM_TEE 1 +/* + * libxl_domain_build_info has the arch_arm.viommu_type field. + */ +#define LIBXL_HAVE_BUILDINFO_ARM_VIOMMU 1 + /* * LIBXL_HAVE_SOFT_RESET indicates that libxl supports performing * 'soft reset' for domains and there is 'soft_reset' shutdown reason diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index cd84a7c66e..b8eff10a41 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -179,6 +179,19 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, return ERROR_FAIL; } + switch (d_config->b_info.arch_arm.viommu_type) { + case LIBXL_VIOMMU_TYPE_NONE: + config->arch.viommu_type = XEN_DOMCTL_CONFIG_VIOMMU_NONE; + break; + case LIBXL_VIOMMU_TYPE_SMMUV3: + config->arch.viommu_type = XEN_DOMCTL_CONFIG_VIOMMU_SMMUV3; + break; + default: + LOG(ERROR, "Unknown vIOMMU type %d", + d_config->b_info.arch_arm.viommu_type); + return ERROR_FAIL; + } + return 0; } diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl index 9e3d33cb5a..06ee5ac6ba 100644 --- a/tools/libs/light/libxl_types.idl +++ b/tools/libs/light/libxl_types.idl @@ -492,6 +492,11 @@ libxl_tee_type = Enumeration("tee_type", [ (1, "optee") ], init_val = "LIBXL_TEE_TYPE_NONE") +libxl_viommu_type = Enumeration("viommu_type", [ + (0, "none"), + (1, "smmuv3") + ], init_val = "LIBXL_VIOMMU_TYPE_NONE") + libxl_rdm_reserve = Struct("rdm_reserve", [ ("strategy", libxl_rdm_reserve_strategy), ("policy", libxl_rdm_reserve_policy), @@ -658,6 +663,7 @@ libxl_domain_build_info = Struct("domain_build_info",[ ("arch_arm", Struct(None, [("gic_version", libxl_gic_version), ("vuart", libxl_vuart_type), + ("viommu_type", libxl_viommu_type), ])), ("arch_x86", Struct(None, [("msr_relaxed", libxl_defbool), ])), diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c index 644ab8f8fd..ef6d8bb3f7 100644 --- a/tools/xl/xl_parse.c +++ b/tools/xl/xl_parse.c @@ -2751,6 +2751,15 @@ skip_usbdev: } 
} + if (!xlu_cfg_get_string (config, "viommu", &buf, 1)) { + e = libxl_viommu_type_from_string(buf, &b_info->arch_arm.viommu_type); + if (e) { + fprintf(stderr, + "Unknown vIOMMU type \"%s\" specified\n", buf); + exit(-ERROR_FAIL); + } + } + parse_vkb_list(config, d_config); xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat", From patchwork Thu Dec 1 16:02:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061532 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id F3A8FC4321E for ; Thu, 1 Dec 2022 16:08:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450949.708495 (Exim 4.92) (envelope-from ) id 1p0m6p-0004U8-VF; Thu, 01 Dec 2022 16:08:23 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450949.708495; Thu, 01 Dec 2022 16:08:23 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m6p-0004U1-SS; Thu, 01 Dec 2022 16:08:23 +0000 Received: by outflank-mailman (input) for mailman id 450949; Thu, 01 Dec 2022 16:08:23 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m6p-0004Tp-6I for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:08:23 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 607465c4-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:08:21 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BEFD6ED1; Thu, 1 Dec 2022 08:08:27 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E56093F73B; Thu, 1 Dec 2022 08:08:19 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 607465c4-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Andrew Cooper , George Dunlap , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 07/21] xen/arm: vIOMMU: Add cmdline boot option "viommu = " Date: Thu, 1 Dec 2022 16:02:31 +0000 Message-Id: <11ad0192b1dfe5f90bc980a09894eb6ff7c5ba1f.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Add cmdline boot option "viommu = " to enable or disable the virtual iommu support for guests on ARM. 
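
Taken together with the previous patches, exposing an emulated SMMUv3 to a
guest now needs two settings, roughly as follows (exact bootloader syntax
varies by setup):

# Xen command line: enable the vIOMMU framework (default is off)
viommu=true

# Guest xl.cfg: select the emulated SMMUv3 for this domain (patch 06)
viommu = "smmuv3"
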
Signed-off-by: Rahul Singh --- docs/misc/xen-command-line.pandoc | 7 +++++++ xen/arch/arm/include/asm/viommu.h | 11 +++++++++++ xen/drivers/passthrough/arm/viommu.c | 9 +++++++++ xen/drivers/passthrough/arm/vsmmu-v3.c | 3 +++ 4 files changed, 30 insertions(+) diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc index 424b12cfb2..14a104f2b9 100644 --- a/docs/misc/xen-command-line.pandoc +++ b/docs/misc/xen-command-line.pandoc @@ -1896,6 +1896,13 @@ This option can be specified more than once (up to 8 times at present). Flag to enable or disable support for PCI passthrough +### viommu (arm) +> `= ` + +> Default: `false` + +Flag to enable or disable support for Virtual IOMMU for guests. + ### pcid (x86) > `= | xpti=` diff --git a/xen/arch/arm/include/asm/viommu.h b/xen/arch/arm/include/asm/viommu.h index 4785877e2a..4de4cceeda 100644 --- a/xen/arch/arm/include/asm/viommu.h +++ b/xen/arch/arm/include/asm/viommu.h @@ -10,6 +10,7 @@ #include extern struct list_head host_iommu_list; +extern bool viommu_enabled; /* data structure for each hardware IOMMU */ struct host_iommu { @@ -50,6 +51,11 @@ uint16_t viommu_get_type(void); void add_to_host_iommu_list(paddr_t addr, paddr_t size, const struct dt_device_node *node); +static always_inline bool is_viommu_enabled(void) +{ + return viommu_enabled; +} + #else static inline uint8_t viommu_get_type(void) @@ -76,6 +82,11 @@ static inline void add_to_host_iommu_list(paddr_t addr, paddr_t size, return; } +static always_inline bool is_viommu_enabled(void) +{ + return false; +} + #endif /* CONFIG_VIRTUAL_IOMMU */ #endif /* __ARCH_ARM_VIOMMU_H__ */ diff --git a/xen/drivers/passthrough/arm/viommu.c b/xen/drivers/passthrough/arm/viommu.c index 53ae46349a..a1d6a04ba9 100644 --- a/xen/drivers/passthrough/arm/viommu.c +++ b/xen/drivers/passthrough/arm/viommu.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include @@ -38,8 +39,16 @@ void add_to_host_iommu_list(paddr_t addr, paddr_t size, list_add_tail(&iommu_data->entry, &host_iommu_list); } +/* By default viommu is disabled. */ +bool __read_mostly viommu_enabled; +boolean_param("viommu", viommu_enabled); + int domain_viommu_init(struct domain *d, uint16_t viommu_type) { + /* Enable viommu when it has been enabled explicitly (viommu=on). 
*/ + if ( !viommu_enabled ) + return 0; + if ( viommu_type == XEN_DOMCTL_CONFIG_VIOMMU_NONE ) return 0; diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index 6b4009e5ef..e36f200ba5 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -105,6 +105,9 @@ void __init vsmmuv3_set_type(void) { const struct viommu_desc *desc = &vsmmuv3_desc; + if ( !is_viommu_enabled() ) + return; + if ( cur_viommu && (cur_viommu != desc) ) { printk("WARNING: Cannot set vIOMMU, already set to a different value\n"); From patchwork Thu Dec 1 16:02:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061533 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id CA71FC43217 for ; Thu, 1 Dec 2022 16:09:15 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450956.708506 (Exim 4.92) (envelope-from ) id 1p0m7V-00051I-79; Thu, 01 Dec 2022 16:09:05 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450956.708506; Thu, 01 Dec 2022 16:09:05 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m7V-00051B-4C; Thu, 01 Dec 2022 16:09:05 +0000 Received: by outflank-mailman (input) for mailman id 450956; Thu, 01 Dec 2022 16:09:04 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m7U-0004Tp-81 for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:09:04 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 78fbfb44-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:09:03 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 157F0ED1; Thu, 1 Dec 2022 08:09:09 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A686B3F73B; Thu, 1 Dec 2022 08:09:01 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 78fbfb44-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 08/21] xen/arm: vsmmuv3: Add support for registers emulation Date: Thu, 1 Dec 2022 16:02:32 +0000 Message-Id: <89018b50b5b0c2c4a406c5a8779b7fd33d59d1e4.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Add initial support for various emulated registers for virtual SMMUv3 for guests and also add support for virtual cmdq and eventq. 
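Before the diff, a short annotated excerpt of the pattern every emulated register below follows; the code is taken verbatim from this patch, only the comments are added. A trapped write is merged into the stored value with vreg_reg32_update(), which applies the guest's access according to the size and offset described by 'info'; a trapped read hands the stored value back through vreg_reg32_extract():

    /* In vsmmuv3_mmio_write(): merge the trapped write into the
     * emulated register. */
    case VREG32(ARM_SMMU_CR1):
        reg32 = smmu->cr[1];                  /* current emulated value  */
        vreg_reg32_update(&reg32, r, info);   /* fold in the guest write */
        smmu->cr[1] = reg32;                  /* store it back           */
        break;

    /* In vsmmuv3_mmio_read(): return the stored value. */
    case VREG32(ARM_SMMU_CR1):
        *r = vreg_reg32_extract(smmu->cr[1], info);
        break;

The 64-bit registers (e.g. ARM_SMMU_STRTAB_BASE) use the vreg_reg64_* counterparts in the same way.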
Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/vsmmu-v3.c | 281 +++++++++++++++++++++++++ 1 file changed, 281 insertions(+) diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index e36f200ba5..c3f99657e6 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -3,25 +3,302 @@ #include #include #include +#include #include +#include + +#include "smmu-v3.h" + +/* Register Definition */ +#define ARM_SMMU_IDR2 0x8 +#define ARM_SMMU_IDR3 0xc +#define ARM_SMMU_IDR4 0x10 +#define IDR0_TERM_MODEL (1 << 26) +#define IDR3_RIL (1 << 10) +#define CR0_RESERVED 0xFFFFFC20 +#define SMMU_IDR1_SIDSIZE 16 +#define SMMU_CMDQS 19 +#define SMMU_EVTQS 19 +#define DWORDS_BYTES 8 /* Struct to hold the vIOMMU ops and vIOMMU type */ extern const struct viommu_desc __read_mostly *cur_viommu; +/* virtual smmu queue */ +struct arm_vsmmu_queue { + uint64_t q_base; /* base register */ + uint32_t prod; + uint32_t cons; + uint8_t ent_size; + uint8_t max_n_shift; +}; + struct virt_smmu { struct domain *d; struct list_head viommu_list; + uint8_t sid_split; + uint32_t features; + uint32_t cr[3]; + uint32_t cr0ack; + uint32_t gerror; + uint32_t gerrorn; + uint32_t strtab_base_cfg; + uint64_t strtab_base; + uint32_t irq_ctrl; + uint64_t gerror_irq_cfg0; + uint64_t evtq_irq_cfg0; + struct arm_vsmmu_queue evtq, cmdq; }; static int vsmmuv3_mmio_write(struct vcpu *v, mmio_info_t *info, register_t r, void *priv) { + struct virt_smmu *smmu = priv; + uint64_t reg; + uint32_t reg32; + + switch ( info->gpa & 0xffff ) + { + case VREG32(ARM_SMMU_CR0): + reg32 = smmu->cr[0]; + vreg_reg32_update(®32, r, info); + smmu->cr[0] = reg32; + smmu->cr0ack = reg32 & ~CR0_RESERVED; + break; + + case VREG32(ARM_SMMU_CR1): + reg32 = smmu->cr[1]; + vreg_reg32_update(®32, r, info); + smmu->cr[1] = reg32; + break; + + case VREG32(ARM_SMMU_CR2): + reg32 = smmu->cr[2]; + vreg_reg32_update(®32, r, info); + smmu->cr[2] = reg32; + break; + + case VREG64(ARM_SMMU_STRTAB_BASE): + reg = smmu->strtab_base; + vreg_reg64_update(®, r, info); + smmu->strtab_base = reg; + break; + + case VREG32(ARM_SMMU_STRTAB_BASE_CFG): + reg32 = smmu->strtab_base_cfg; + vreg_reg32_update(®32, r, info); + smmu->strtab_base_cfg = reg32; + + smmu->sid_split = FIELD_GET(STRTAB_BASE_CFG_SPLIT, reg32); + smmu->features |= STRTAB_BASE_CFG_FMT_2LVL; + break; + + case VREG32(ARM_SMMU_CMDQ_BASE): + reg = smmu->cmdq.q_base; + vreg_reg64_update(®, r, info); + smmu->cmdq.q_base = reg; + smmu->cmdq.max_n_shift = FIELD_GET(Q_BASE_LOG2SIZE, smmu->cmdq.q_base); + if ( smmu->cmdq.max_n_shift > SMMU_CMDQS ) + smmu->cmdq.max_n_shift = SMMU_CMDQS; + break; + + case VREG32(ARM_SMMU_CMDQ_PROD): + reg32 = smmu->cmdq.prod; + vreg_reg32_update(®32, r, info); + smmu->cmdq.prod = reg32; + break; + + case VREG32(ARM_SMMU_CMDQ_CONS): + reg32 = smmu->cmdq.cons; + vreg_reg32_update(®32, r, info); + smmu->cmdq.cons = reg32; + break; + + case VREG32(ARM_SMMU_EVTQ_BASE): + reg = smmu->evtq.q_base; + vreg_reg64_update(®, r, info); + smmu->evtq.q_base = reg; + smmu->evtq.max_n_shift = FIELD_GET(Q_BASE_LOG2SIZE, smmu->evtq.q_base); + if ( smmu->cmdq.max_n_shift > SMMU_EVTQS ) + smmu->cmdq.max_n_shift = SMMU_EVTQS; + break; + + case VREG32(ARM_SMMU_EVTQ_PROD): + reg32 = smmu->evtq.prod; + vreg_reg32_update(®32, r, info); + smmu->evtq.prod = reg32; + break; + + case VREG32(ARM_SMMU_EVTQ_CONS): + reg32 = smmu->evtq.cons; + vreg_reg32_update(®32, r, info); + smmu->evtq.cons = reg32; + break; + + case VREG32(ARM_SMMU_IRQ_CTRL): + 
reg32 = smmu->irq_ctrl; + vreg_reg32_update(®32, r, info); + smmu->irq_ctrl = reg32; + break; + + case VREG64(ARM_SMMU_GERROR_IRQ_CFG0): + reg = smmu->gerror_irq_cfg0; + vreg_reg64_update(®, r, info); + smmu->gerror_irq_cfg0 = reg; + break; + + case VREG64(ARM_SMMU_EVTQ_IRQ_CFG0): + reg = smmu->evtq_irq_cfg0; + vreg_reg64_update(®, r, info); + smmu->evtq_irq_cfg0 = reg; + break; + + case VREG32(ARM_SMMU_GERRORN): + reg = smmu->gerrorn; + vreg_reg64_update(®, r, info); + smmu->gerrorn = reg; + break; + + default: + printk(XENLOG_G_ERR + "%pv: vSMMUv3: unhandled write r%d offset %"PRIpaddr"\n", + v, info->dabt.reg, (unsigned long)info->gpa & 0xffff); + return IO_ABORT; + } + return IO_HANDLED; } static int vsmmuv3_mmio_read(struct vcpu *v, mmio_info_t *info, register_t *r, void *priv) { + struct virt_smmu *smmu = priv; + uint64_t reg; + + switch ( info->gpa & 0xffff ) + { + case VREG32(ARM_SMMU_IDR0): + reg = FIELD_PREP(IDR0_S1P, 1) | FIELD_PREP(IDR0_TTF, 2) | + FIELD_PREP(IDR0_COHACC, 0) | FIELD_PREP(IDR0_ASID16, 1) | + FIELD_PREP(IDR0_TTENDIAN, 0) | FIELD_PREP(IDR0_STALL_MODEL, 1) | + FIELD_PREP(IDR0_ST_LVL, 1) | FIELD_PREP(IDR0_TERM_MODEL, 1); + *r = vreg_reg32_extract(reg, info); + break; + + case VREG32(ARM_SMMU_IDR1): + reg = FIELD_PREP(IDR1_SIDSIZE, SMMU_IDR1_SIDSIZE) | + FIELD_PREP(IDR1_CMDQS, SMMU_CMDQS) | + FIELD_PREP(IDR1_EVTQS, SMMU_EVTQS); + *r = vreg_reg32_extract(reg, info); + break; + + case VREG32(ARM_SMMU_IDR2): + goto read_reserved; + + case VREG32(ARM_SMMU_IDR3): + reg = FIELD_PREP(IDR3_RIL, 0); + *r = vreg_reg32_extract(reg, info); + break; + + case VREG32(ARM_SMMU_IDR4): + goto read_impl_defined; + + case VREG32(ARM_SMMU_IDR5): + reg = FIELD_PREP(IDR5_GRAN4K, 1) | FIELD_PREP(IDR5_GRAN16K, 1) | + FIELD_PREP(IDR5_GRAN64K, 1) | FIELD_PREP(IDR5_OAS, IDR5_OAS_48_BIT); + *r = vreg_reg32_extract(reg, info); + break; + + case VREG32(ARM_SMMU_CR0): + *r = vreg_reg32_extract(smmu->cr[0], info); + break; + + case VREG32(ARM_SMMU_CR0ACK): + *r = vreg_reg32_extract(smmu->cr0ack, info); + break; + + case VREG32(ARM_SMMU_CR1): + *r = vreg_reg32_extract(smmu->cr[1], info); + break; + + case VREG32(ARM_SMMU_CR2): + *r = vreg_reg32_extract(smmu->cr[2], info); + break; + + case VREG32(ARM_SMMU_STRTAB_BASE): + *r = vreg_reg64_extract(smmu->strtab_base, info); + break; + + case VREG32(ARM_SMMU_STRTAB_BASE_CFG): + *r = vreg_reg32_extract(smmu->strtab_base_cfg, info); + break; + + case VREG32(ARM_SMMU_CMDQ_BASE): + *r = vreg_reg64_extract(smmu->cmdq.q_base, info); + break; + + case VREG32(ARM_SMMU_CMDQ_PROD): + *r = vreg_reg32_extract(smmu->cmdq.prod, info); + break; + + case VREG32(ARM_SMMU_CMDQ_CONS): + *r = vreg_reg32_extract(smmu->cmdq.cons, info); + break; + + case VREG32(ARM_SMMU_EVTQ_BASE): + *r = vreg_reg64_extract(smmu->evtq.q_base, info); + break; + + case VREG32(ARM_SMMU_EVTQ_PROD): + *r = vreg_reg32_extract(smmu->evtq.prod, info); + break; + + case VREG32(ARM_SMMU_EVTQ_CONS): + *r = vreg_reg32_extract(smmu->evtq.cons, info); + break; + + case VREG32(ARM_SMMU_IRQ_CTRL): + case VREG32(ARM_SMMU_IRQ_CTRLACK): + *r = vreg_reg32_extract(smmu->irq_ctrl, info); + break; + + case VREG64(ARM_SMMU_GERROR_IRQ_CFG0): + *r = vreg_reg64_extract(smmu->gerror_irq_cfg0, info); + break; + + case VREG64(ARM_SMMU_EVTQ_IRQ_CFG0): + *r = vreg_reg64_extract(smmu->evtq_irq_cfg0, info); + break; + + case VREG32(ARM_SMMU_GERROR): + *r = vreg_reg64_extract(smmu->gerror, info); + break; + + case VREG32(ARM_SMMU_GERRORN): + *r = vreg_reg64_extract(smmu->gerrorn, info); + break; + + default: + 
printk(XENLOG_G_ERR + "%pv: vSMMUv3: unhandled read r%d offset %"PRIpaddr"\n", + v, info->dabt.reg, (unsigned long)info->gpa & 0xffff); + return IO_ABORT; + } + + return IO_HANDLED; + + read_impl_defined: + printk(XENLOG_G_DEBUG + "%pv: vSMMUv3: RAZ on implementation defined register offset %"PRIpaddr"\n", + v, info->gpa & 0xffff); + *r = 0; + return IO_HANDLED; + + read_reserved: + printk(XENLOG_G_DEBUG + "%pv: vSMMUv3: RAZ on reserved register offset %"PRIpaddr"\n", + v, info->gpa & 0xffff); + *r = 0; return IO_HANDLED; } @@ -39,6 +316,10 @@ static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size) return -ENOMEM; smmu->d = d; + smmu->cmdq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_CMDQS); + smmu->cmdq.ent_size = CMDQ_ENT_DWORDS * DWORDS_BYTES; + smmu->evtq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_EVTQS); + smmu->evtq.ent_size = EVTQ_ENT_DWORDS * DWORDS_BYTES; register_mmio_handler(d, &vsmmuv3_mmio_handler, addr, size, smmu); From patchwork Thu Dec 1 16:02:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061534 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 74A10C43217 for ; Thu, 1 Dec 2022 16:09:56 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450966.708516 (Exim 4.92) (envelope-from ) id 1p0m8A-0005fb-J9; Thu, 01 Dec 2022 16:09:46 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450966.708516; Thu, 01 Dec 2022 16:09:46 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m8A-0005fU-Fz; Thu, 01 Dec 2022 16:09:46 +0000 Received: by outflank-mailman (input) for mailman id 450966; Thu, 01 Dec 2022 16:09:44 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m88-0005Wl-ID for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:09:44 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 912c8996-7192-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:09:43 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A5B3DED1; Thu, 1 Dec 2022 08:09:49 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 433683F73B; Thu, 1 Dec 2022 08:09:42 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 912c8996-7192-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 09/21] xen/arm: vsmmuv3: Add support for cmdqueue handling Date: Thu, 1 Dec 2022 16:02:33 +0000 Message-Id: 
<6976a8484515fe02e9c2bd65cfb6a93632a228eb.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Add support for virtual cmdqueue handling for guests Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/vsmmu-v3.c | 101 +++++++++++++++++++++++++ 1 file changed, 101 insertions(+) diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index c3f99657e6..cc651a2dc8 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -1,5 +1,6 @@ /* SPDX-License-Identifier: (GPL-2.0-or-later OR BSD-2-Clause) */ +#include #include #include #include @@ -24,6 +25,26 @@ /* Struct to hold the vIOMMU ops and vIOMMU type */ extern const struct viommu_desc __read_mostly *cur_viommu; +/* SMMUv3 command definitions */ +#define CMDQ_OP_PREFETCH_CFG 0x1 +#define CMDQ_OP_CFGI_STE 0x3 +#define CMDQ_OP_CFGI_ALL 0x4 +#define CMDQ_OP_CFGI_CD 0x5 +#define CMDQ_OP_CFGI_CD_ALL 0x6 +#define CMDQ_OP_TLBI_NH_ASID 0x11 +#define CMDQ_OP_TLBI_NH_VA 0x12 +#define CMDQ_OP_TLBI_NSNH_ALL 0x30 +#define CMDQ_OP_CMD_SYNC 0x46 + +/* Queue Handling */ +#define Q_BASE(q) ((q)->q_base & Q_BASE_ADDR_MASK) +#define Q_CONS_ENT(q) (Q_BASE(q) + Q_IDX(q, (q)->cons) * (q)->ent_size) +#define Q_PROD_ENT(q) (Q_BASE(q) + Q_IDX(q, (q)->prod) * (q)->ent_size) + +/* Helper Macros */ +#define smmu_get_cmdq_enabled(x) FIELD_GET(CR0_CMDQEN, x) +#define smmu_cmd_get_command(x) FIELD_GET(CMDQ_0_OP, x) + /* virtual smmu queue */ struct arm_vsmmu_queue { uint64_t q_base; /* base register */ @@ -48,8 +69,80 @@ struct virt_smmu { uint64_t gerror_irq_cfg0; uint64_t evtq_irq_cfg0; struct arm_vsmmu_queue evtq, cmdq; + spinlock_t cmd_queue_lock; }; +/* Queue manipulation functions */ +static bool queue_empty(struct arm_vsmmu_queue *q) +{ + return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) && + Q_WRP(q, q->prod) == Q_WRP(q, q->cons); +} + +static void queue_inc_cons(struct arm_vsmmu_queue *q) +{ + uint32_t cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1; + q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons); +} + +static void dump_smmu_command(uint64_t *command) +{ + gdprintk(XENLOG_ERR, "cmd 0x%02llx: %016lx %016lx\n", + smmu_cmd_get_command(command[0]), command[0], command[1]); +} +static int arm_vsmmu_handle_cmds(struct virt_smmu *smmu) +{ + struct arm_vsmmu_queue *q = &smmu->cmdq; + struct domain *d = smmu->d; + uint64_t command[CMDQ_ENT_DWORDS]; + paddr_t addr; + + if ( !smmu_get_cmdq_enabled(smmu->cr[0]) ) + return 0; + + while ( !queue_empty(q) ) + { + int ret; + + addr = Q_CONS_ENT(q); + ret = access_guest_memory_by_ipa(d, addr, command, + sizeof(command), false); + if ( ret ) + return ret; + + switch ( smmu_cmd_get_command(command[0]) ) + { + case CMDQ_OP_CFGI_STE: + break; + case CMDQ_OP_PREFETCH_CFG: + case CMDQ_OP_CFGI_CD: + case CMDQ_OP_CFGI_CD_ALL: + case CMDQ_OP_CFGI_ALL: + case CMDQ_OP_CMD_SYNC: + break; + case CMDQ_OP_TLBI_NH_ASID: + case CMDQ_OP_TLBI_NSNH_ALL: + case CMDQ_OP_TLBI_NH_VA: + if ( !iommu_iotlb_flush_all(smmu->d, 1) ) + break; + default: + gdprintk(XENLOG_ERR, "vSMMUv3: unhandled command\n"); + dump_smmu_command(command); + break; + } + + if ( ret ) + { + gdprintk(XENLOG_ERR, + "vSMMUv3: command error %d while handling command\n", + ret); + dump_smmu_command(command); + } + queue_inc_cons(q); + } + return 0; +} + static int vsmmuv3_mmio_write(struct vcpu *v, mmio_info_t *info, register_t r, void *priv) { @@ -103,9 +196,15 @@ static int vsmmuv3_mmio_write(struct vcpu *v, mmio_info_t *info, 
break; case VREG32(ARM_SMMU_CMDQ_PROD): + spin_lock(&smmu->cmd_queue_lock); reg32 = smmu->cmdq.prod; vreg_reg32_update(®32, r, info); smmu->cmdq.prod = reg32; + + if ( arm_vsmmu_handle_cmds(smmu) ) + gdprintk(XENLOG_ERR, "error handling vSMMUv3 commands\n"); + + spin_unlock(&smmu->cmd_queue_lock); break; case VREG32(ARM_SMMU_CMDQ_CONS): @@ -321,6 +420,8 @@ static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size) smmu->evtq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_EVTQS); smmu->evtq.ent_size = EVTQ_ENT_DWORDS * DWORDS_BYTES; + spin_lock_init(&smmu->cmd_queue_lock); + register_mmio_handler(d, &vsmmuv3_mmio_handler, addr, size, smmu); /* Register the vIOMMU to be able to clean it up later. */ From patchwork Thu Dec 1 16:02:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061535 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 5A17EC4321E for ; Thu, 1 Dec 2022 16:11:16 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450973.708527 (Exim 4.92) (envelope-from ) id 1p0m9Q-00076O-Sz; Thu, 01 Dec 2022 16:11:04 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450973.708527; Thu, 01 Dec 2022 16:11:04 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m9Q-00076H-Pv; Thu, 01 Dec 2022 16:11:04 +0000 Received: by outflank-mailman (input) for mailman id 450973; Thu, 01 Dec 2022 16:11:03 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0m9O-00075z-Vj for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:11:02 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id bfb615b2-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:11:01 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AC905ED1; Thu, 1 Dec 2022 08:11:07 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 49E053F73B; Thu, 1 Dec 2022 08:11:00 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: bfb615b2-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 10/21] xen/arm: vsmmuv3: Add support for command CMD_CFGI_STE Date: Thu, 1 Dec 2022 16:02:34 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 CMD_CFGI_STE is used to invalidate/validate the STE. Emulated vSMMUv3 driver in XEN will read the STE from the guest memory space and capture the Stage-1 configuration required to support nested translation. 
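As a small aside before the diff, a self-contained sketch of the stream-table arithmetic used here (illustrative only, not part of the series): an SMMUv3 Stream Table Entry is 64 bytes (STRTAB_STE_DWORDS is 8 64-bit words), so in a linear stream table the entry for a given StreamID sits at base + sid * 64. arm_vsmmu_find_ste() below performs the same computation and additionally walks the 2-level format, where the upper SID bits select an L1 descriptor and the low sid_split bits index the L2 array it points to.

    #include <stdint.h>

    #define STE_DWORDS 8    /* 8 x 8-byte words = one 64-byte STE */

    /* Guest-physical address of the STE for 'sid' in a linear table. */
    static uint64_t linear_ste_ipa(uint64_t strtab_base, uint32_t sid)
    {
        return strtab_base + (uint64_t)sid * STE_DWORDS * sizeof(uint64_t);
    }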
Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/vsmmu-v3.c | 148 +++++++++++++++++++++++++ 1 file changed, 148 insertions(+) diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index cc651a2dc8..916b97b8a2 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -44,6 +44,21 @@ extern const struct viommu_desc __read_mostly *cur_viommu; /* Helper Macros */ #define smmu_get_cmdq_enabled(x) FIELD_GET(CR0_CMDQEN, x) #define smmu_cmd_get_command(x) FIELD_GET(CMDQ_0_OP, x) +#define smmu_cmd_get_sid(x) FIELD_GET(CMDQ_PREFETCH_0_SID, x) +#define smmu_get_ste_s1cdmax(x) FIELD_GET(STRTAB_STE_0_S1CDMAX, x) +#define smmu_get_ste_s1fmt(x) FIELD_GET(STRTAB_STE_0_S1FMT, x) +#define smmu_get_ste_s1stalld(x) FIELD_GET(STRTAB_STE_1_S1STALLD, x) +#define smmu_get_ste_s1ctxptr(x) FIELD_PREP(STRTAB_STE_0_S1CTXPTR_MASK, \ + FIELD_GET(STRTAB_STE_0_S1CTXPTR_MASK, x)) + +/* stage-1 translation configuration */ +struct arm_vsmmu_s1_trans_cfg { + paddr_t s1ctxptr; + uint8_t s1fmt; + uint8_t s1cdmax; + bool bypassed; /* translation is bypassed */ + bool aborted; /* translation is aborted */ +}; /* virtual smmu queue */ struct arm_vsmmu_queue { @@ -90,6 +105,138 @@ static void dump_smmu_command(uint64_t *command) gdprintk(XENLOG_ERR, "cmd 0x%02llx: %016lx %016lx\n", smmu_cmd_get_command(command[0]), command[0], command[1]); } +static int arm_vsmmu_find_ste(struct virt_smmu *smmu, uint32_t sid, + uint64_t *ste) +{ + paddr_t addr, strtab_base; + struct domain *d = smmu->d; + uint32_t log2size; + int strtab_size_shift; + int ret; + + log2size = FIELD_GET(STRTAB_BASE_CFG_LOG2SIZE, smmu->strtab_base_cfg); + + if ( sid >= (1 << MIN(log2size, SMMU_IDR1_SIDSIZE)) ) + return -EINVAL; + + if ( smmu->features & STRTAB_BASE_CFG_FMT_2LVL ) + { + int idx, max_l2_ste, span; + paddr_t l1ptr, l2ptr; + uint64_t l1std; + + strtab_size_shift = MAX(5, (int)log2size - smmu->sid_split - 1 + 3); + strtab_base = smmu->strtab_base & STRTAB_BASE_ADDR_MASK & + ~GENMASK_ULL(strtab_size_shift, 0); + idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS; + l1ptr = (paddr_t)(strtab_base + idx * sizeof(l1std)); + + ret = access_guest_memory_by_ipa(d, l1ptr, &l1std, + sizeof(l1std), false); + if ( ret ) + { + gdprintk(XENLOG_ERR, + "Could not read L1PTR at 0X%"PRIx64"\n", l1ptr); + return ret; + } + + span = FIELD_GET(STRTAB_L1_DESC_SPAN, l1std); + if ( !span ) + { + gdprintk(XENLOG_ERR, "Bad StreamID span\n"); + return -EINVAL; + } + + max_l2_ste = (1 << span) - 1; + l2ptr = FIELD_PREP(STRTAB_L1_DESC_L2PTR_MASK, + FIELD_GET(STRTAB_L1_DESC_L2PTR_MASK, l1std)); + idx = sid & ((1 << smmu->sid_split) - 1); + if ( idx > max_l2_ste ) + { + gdprintk(XENLOG_ERR, "idx=%d > max_l2_ste=%d\n", + idx, max_l2_ste); + return -EINVAL; + } + addr = l2ptr + idx * sizeof(*ste) * STRTAB_STE_DWORDS; + } + else + { + strtab_size_shift = log2size + 5; + strtab_base = smmu->strtab_base & STRTAB_BASE_ADDR_MASK & + ~GENMASK_ULL(strtab_size_shift, 0); + addr = strtab_base + sid * sizeof(*ste) * STRTAB_STE_DWORDS; + } + ret = access_guest_memory_by_ipa(d, addr, ste, sizeof(*ste), false); + if ( ret ) + { + gdprintk(XENLOG_ERR, + "Cannot fetch pte at address=0x%"PRIx64"\n", addr); + return -EINVAL; + } + + return 0; +} + +static int arm_vsmmu_decode_ste(struct virt_smmu *smmu, uint32_t sid, + struct arm_vsmmu_s1_trans_cfg *cfg, + uint64_t *ste) +{ + uint64_t val = ste[0]; + + if ( !(val & STRTAB_STE_0_V) ) + return -EAGAIN; + + switch ( FIELD_GET(STRTAB_STE_0_CFG, val) ) + { + case 
STRTAB_STE_0_CFG_BYPASS: + cfg->bypassed = true; + return 0; + case STRTAB_STE_0_CFG_ABORT: + cfg->aborted = true; + return 0; + case STRTAB_STE_0_CFG_S1_TRANS: + break; + case STRTAB_STE_0_CFG_S2_TRANS: + gdprintk(XENLOG_ERR, "vSMMUv3 does not support stage 2 yet\n"); + goto bad_ste; + default: + BUG(); /* STE corruption */ + } + + cfg->s1ctxptr = smmu_get_ste_s1ctxptr(val); + cfg->s1fmt = smmu_get_ste_s1fmt(val); + cfg->s1cdmax = smmu_get_ste_s1cdmax(val); + if ( cfg->s1cdmax != 0 ) + { + gdprintk(XENLOG_ERR, + "vSMMUv3 does not support multiple context descriptors\n"); + goto bad_ste; + } + + return 0; + +bad_ste: + return -EINVAL; +} + +static int arm_vsmmu_handle_cfgi_ste(struct virt_smmu *smmu, uint64_t *cmdptr) +{ + int ret; + uint64_t ste[STRTAB_STE_DWORDS]; + struct arm_vsmmu_s1_trans_cfg s1_cfg = {0}; + uint32_t sid = smmu_cmd_get_sid(cmdptr[0]); + + ret = arm_vsmmu_find_ste(smmu, sid, ste); + if ( ret ) + return ret; + + ret = arm_vsmmu_decode_ste(smmu, sid, &s1_cfg, ste); + if ( ret ) + return (ret == -EAGAIN ) ? 0 : ret; + + return 0; +} + static int arm_vsmmu_handle_cmds(struct virt_smmu *smmu) { struct arm_vsmmu_queue *q = &smmu->cmdq; @@ -113,6 +260,7 @@ static int arm_vsmmu_handle_cmds(struct virt_smmu *smmu) switch ( smmu_cmd_get_command(command[0]) ) { case CMDQ_OP_CFGI_STE: + ret = arm_vsmmu_handle_cfgi_ste(smmu, command); break; case CMDQ_OP_PREFETCH_CFG: case CMDQ_OP_CFGI_CD: From patchwork Thu Dec 1 16:02:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061544 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0F86FC47088 for ; Thu, 1 Dec 2022 16:12:36 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450989.708549 (Exim 4.92) (envelope-from ) id 1p0mAn-0008VX-Dk; Thu, 01 Dec 2022 16:12:29 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450989.708549; Thu, 01 Dec 2022 16:12:29 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mAn-0008VN-B1; Thu, 01 Dec 2022 16:12:29 +0000 Received: by outflank-mailman (input) for mailman id 450989; Thu, 01 Dec 2022 16:12:28 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mAm-00075z-A7 for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:12:28 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id f2b7bad0-7192-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:12:27 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4304DD6E; Thu, 1 Dec 2022 08:12:33 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8469C3F73B; Thu, 1 Dec 2022 08:12:25 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: 
List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: f2b7bad0-7192-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Bertrand Marquis , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Jan Beulich , Paul Durrant , =?utf-8?q?Rog?= =?utf-8?q?er_Pau_Monn=C3=A9?= Subject: [RFC PATCH 11/21] xen/arm: vsmmuv3: Attach Stage-1 configuration to SMMUv3 hardware Date: Thu, 1 Dec 2022 16:02:35 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Attach the Stage-1 configuration to device STE to support nested translation for the guests. Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.c | 79 ++++++++++++++++++++++++++ xen/drivers/passthrough/arm/smmu-v3.h | 1 + xen/drivers/passthrough/arm/vsmmu-v3.c | 18 ++++++ xen/include/xen/iommu.h | 14 +++++ 4 files changed, 112 insertions(+) diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index 4f96fdb92f..c4b4a5d86d 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -2754,6 +2754,37 @@ static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev) return NULL; } +static struct iommu_domain *arm_smmu_get_domain_by_sid(struct domain *d, + u32 sid) +{ + int i; + unsigned long flags; + struct iommu_domain *io_domain; + struct arm_smmu_domain *smmu_domain; + struct arm_smmu_master *master; + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + + /* + * Loop through the &xen_domain->contexts to locate a context + * assigned to this SMMU + */ + list_for_each_entry(io_domain, &xen_domain->contexts, list) { + smmu_domain = to_smmu_domain(io_domain); + + spin_lock_irqsave(&smmu_domain->devices_lock, flags); + list_for_each_entry(master, &smmu_domain->devices, domain_head) { + for (i = 0; i < master->num_streams; i++) { + if (sid != master->streams[i].id) + continue; + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); + return io_domain; + } + } + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); + } + return NULL; +} + static struct iommu_domain *arm_smmu_get_domain(struct domain *d, struct device *dev) { @@ -2909,6 +2940,53 @@ static void arm_smmu_iommu_xen_domain_teardown(struct domain *d) xfree(xen_domain); } +static int arm_smmu_attach_guest_config(struct domain *d, u32 sid, + struct iommu_guest_config *cfg) +{ + int ret = -EINVAL; + unsigned long flags; + struct arm_smmu_master *master; + struct arm_smmu_domain *smmu_domain; + struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv; + struct iommu_domain *io_domain = arm_smmu_get_domain_by_sid(d, sid); + + if (!io_domain) + return -ENODEV; + + smmu_domain = to_smmu_domain(io_domain); + + spin_lock(&xen_domain->lock); + + switch (cfg->config) { + case ARM_SMMU_DOMAIN_ABORT: + smmu_domain->abort = true; + break; + case ARM_SMMU_DOMAIN_BYPASS: + smmu_domain->abort = false; + break; + case ARM_SMMU_DOMAIN_NESTED: + /* Enable Nested stage translation. 
*/ + smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED; + smmu_domain->s1_cfg.s1ctxptr = cfg->s1ctxptr; + smmu_domain->s1_cfg.s1fmt = cfg->s1fmt; + smmu_domain->s1_cfg.s1cdmax = cfg->s1cdmax; + smmu_domain->abort = false; + break; + default: + goto out; + } + + spin_lock_irqsave(&smmu_domain->devices_lock, flags); + list_for_each_entry(master, &smmu_domain->devices, domain_head) + arm_smmu_install_ste_for_dev(master); + spin_unlock_irqrestore(&smmu_domain->devices_lock, flags); + + ret = 0; +out: + spin_unlock(&xen_domain->lock); + return ret; +} + static const struct iommu_ops arm_smmu_iommu_ops = { .page_sizes = PAGE_SIZE_4K, .init = arm_smmu_iommu_xen_domain_init, @@ -2921,6 +2999,7 @@ static const struct iommu_ops arm_smmu_iommu_ops = { .unmap_page = arm_iommu_unmap_page, .dt_xlate = arm_smmu_dt_xlate, .add_device = arm_smmu_add_device, + .attach_guest_config = arm_smmu_attach_guest_config }; static __init int arm_smmu_dt_init(struct dt_device_node *dev, diff --git a/xen/drivers/passthrough/arm/smmu-v3.h b/xen/drivers/passthrough/arm/smmu-v3.h index e270fe05e0..50a050408b 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.h +++ b/xen/drivers/passthrough/arm/smmu-v3.h @@ -393,6 +393,7 @@ enum arm_smmu_domain_stage { ARM_SMMU_DOMAIN_S2, ARM_SMMU_DOMAIN_NESTED, ARM_SMMU_DOMAIN_BYPASS, + ARM_SMMU_DOMAIN_ABORT, }; /* Xen specific code. */ diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index 916b97b8a2..5188181929 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -223,8 +223,11 @@ static int arm_vsmmu_handle_cfgi_ste(struct virt_smmu *smmu, uint64_t *cmdptr) { int ret; uint64_t ste[STRTAB_STE_DWORDS]; + struct domain *d = smmu->d; + struct domain_iommu *hd = dom_iommu(d); struct arm_vsmmu_s1_trans_cfg s1_cfg = {0}; uint32_t sid = smmu_cmd_get_sid(cmdptr[0]); + struct iommu_guest_config guest_cfg = {0}; ret = arm_vsmmu_find_ste(smmu, sid, ste); if ( ret ) @@ -234,6 +237,21 @@ static int arm_vsmmu_handle_cfgi_ste(struct virt_smmu *smmu, uint64_t *cmdptr) if ( ret ) return (ret == -EAGAIN ) ? 
0 : ret; + guest_cfg.s1ctxptr = s1_cfg.s1ctxptr; + guest_cfg.s1fmt = s1_cfg.s1fmt; + guest_cfg.s1cdmax = s1_cfg.s1cdmax; + + if ( s1_cfg.bypassed ) + guest_cfg.config = ARM_SMMU_DOMAIN_BYPASS; + else if ( s1_cfg.aborted ) + guest_cfg.config = ARM_SMMU_DOMAIN_ABORT; + else + guest_cfg.config = ARM_SMMU_DOMAIN_NESTED; + + ret = hd->platform_ops->attach_guest_config(d, sid, &guest_cfg); + if ( ret ) + return ret; + return 0; } diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h index 4f22fc1bed..b2fc027e5e 100644 --- a/xen/include/xen/iommu.h +++ b/xen/include/xen/iommu.h @@ -230,6 +230,15 @@ int iommu_do_dt_domctl(struct xen_domctl *, struct domain *, #endif /* HAS_DEVICE_TREE */ +#ifdef CONFIG_ARM +struct iommu_guest_config { + paddr_t s1ctxptr; + uint8_t config; + uint8_t s1fmt; + uint8_t s1cdmax; +}; +#endif /* CONFIG_ARM */ + struct page_info; /* @@ -302,6 +311,11 @@ struct iommu_ops { */ int (*dt_xlate)(device_t *dev, const struct dt_phandle_args *args); #endif + +#ifdef CONFIG_ARM + int (*attach_guest_config)(struct domain *d, u32 sid, + struct iommu_guest_config *cfg); +#endif }; /* From patchwork Thu Dec 1 16:02:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061545 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 0411FC43217 for ; Thu, 1 Dec 2022 16:13:01 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.450995.708560 (Exim 4.92) (envelope-from ) id 1p0mBB-0000ZR-Qf; Thu, 01 Dec 2022 16:12:53 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 450995.708560; Thu, 01 Dec 2022 16:12:53 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mBB-0000ZK-O0; Thu, 01 Dec 2022 16:12:53 +0000 Received: by outflank-mailman (input) for mailman id 450995; Thu, 01 Dec 2022 16:12:52 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mBA-0008UE-Gz for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:12:52 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 0111fe4b-7193-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:12:51 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 48158D6E; Thu, 1 Dec 2022 08:12:57 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D934D3F73B; Thu, 1 Dec 2022 08:12:49 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 0111fe4b-7193-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 
12/21] xen/arm: vsmmuv3: Add support for event queue and global error Date: Thu, 1 Dec 2022 16:02:36 +0000 Message-Id: <2826699f4bac16359531f43e9e6b3c71885b1ebb.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Event queue is used to send the events to guest when there is an events/ faults. Add support for event queue to send events to guest. Global error in SMMUv3 hw will be updated in smmu_gerror and smmu_gerrorn register. Add support for global error registers to send global error to guest. Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.h | 20 +++ xen/drivers/passthrough/arm/vsmmu-v3.c | 163 ++++++++++++++++++++++++- xen/include/public/arch-arm.h | 5 +- 3 files changed, 183 insertions(+), 5 deletions(-) diff --git a/xen/drivers/passthrough/arm/smmu-v3.h b/xen/drivers/passthrough/arm/smmu-v3.h index 50a050408b..b598cdeb72 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.h +++ b/xen/drivers/passthrough/arm/smmu-v3.h @@ -348,6 +348,26 @@ #define EVTQ_0_ID GENMASK_ULL(7, 0) +#define EVT_ID_BAD_STREAMID 0x02 +#define EVT_ID_BAD_STE 0x04 +#define EVT_ID_TRANSLATION_FAULT 0x10 +#define EVT_ID_ADDR_SIZE_FAULT 0x11 +#define EVT_ID_ACCESS_FAULT 0x12 +#define EVT_ID_PERMISSION_FAULT 0x13 + +#define EVTQ_0_SSV (1UL << 11) +#define EVTQ_0_SSID GENMASK_ULL(31, 12) +#define EVTQ_0_SID GENMASK_ULL(63, 32) +#define EVTQ_1_STAG GENMASK_ULL(15, 0) +#define EVTQ_1_STALL (1UL << 31) +#define EVTQ_1_PnU (1UL << 33) +#define EVTQ_1_InD (1UL << 34) +#define EVTQ_1_RnW (1UL << 35) +#define EVTQ_1_S2 (1UL << 39) +#define EVTQ_1_CLASS GENMASK_ULL(41, 40) +#define EVTQ_1_TT_READ (1UL << 44) +#define EVTQ_2_ADDR GENMASK_ULL(63, 0) +#define EVTQ_3_IPA GENMASK_ULL(51, 12) /* PRI queue */ #define PRIQ_ENT_SZ_SHIFT 4 #define PRIQ_ENT_DWORDS ((1 << PRIQ_ENT_SZ_SHIFT) >> 3) diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index 5188181929..031c1f74b6 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -43,6 +43,7 @@ extern const struct viommu_desc __read_mostly *cur_viommu; /* Helper Macros */ #define smmu_get_cmdq_enabled(x) FIELD_GET(CR0_CMDQEN, x) +#define smmu_get_evtq_enabled(x) FIELD_GET(CR0_EVTQEN, x) #define smmu_cmd_get_command(x) FIELD_GET(CMDQ_0_OP, x) #define smmu_cmd_get_sid(x) FIELD_GET(CMDQ_PREFETCH_0_SID, x) #define smmu_get_ste_s1cdmax(x) FIELD_GET(STRTAB_STE_0_S1CDMAX, x) @@ -51,6 +52,35 @@ extern const struct viommu_desc __read_mostly *cur_viommu; #define smmu_get_ste_s1ctxptr(x) FIELD_PREP(STRTAB_STE_0_S1CTXPTR_MASK, \ FIELD_GET(STRTAB_STE_0_S1CTXPTR_MASK, x)) +/* event queue entry */ +struct arm_smmu_evtq_ent { + /* Common fields */ + uint8_t opcode; + uint32_t sid; + + /* Event-specific fields */ + union { + struct { + uint32_t ssid; + bool ssv; + } c_bad_ste_streamid; + + struct { + bool stall; + uint16_t stag; + uint32_t ssid; + bool ssv; + bool s2; + uint64_t addr; + bool rnw; + bool pnu; + bool ind; + uint8_t class; + uint64_t addr2; + } f_translation; + }; +}; + /* stage-1 translation configuration */ struct arm_vsmmu_s1_trans_cfg { paddr_t s1ctxptr; @@ -81,6 +111,7 @@ struct virt_smmu { uint32_t strtab_base_cfg; uint64_t strtab_base; uint32_t irq_ctrl; + uint32_t virq; uint64_t gerror_irq_cfg0; uint64_t evtq_irq_cfg0; struct arm_vsmmu_queue evtq, cmdq; @@ -88,6 +119,12 @@ struct virt_smmu { }; /* Queue manipulation functions */ +static bool queue_full(struct arm_vsmmu_queue *q) +{ + return Q_IDX(q, q->prod) == Q_IDX(q, 
q->cons) && + Q_WRP(q, q->prod) != Q_WRP(q, q->cons); +} + static bool queue_empty(struct arm_vsmmu_queue *q) { return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) && @@ -100,11 +137,105 @@ static void queue_inc_cons(struct arm_vsmmu_queue *q) q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons); } +static void queue_inc_prod(struct arm_vsmmu_queue *q) +{ + u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1; + q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod); +} + static void dump_smmu_command(uint64_t *command) { gdprintk(XENLOG_ERR, "cmd 0x%02llx: %016lx %016lx\n", smmu_cmd_get_command(command[0]), command[0], command[1]); } + +static void arm_vsmmu_inject_irq(struct virt_smmu *smmu, bool is_gerror, + uint32_t gerror_err) +{ + uint32_t new_gerrors, pending; + + if ( is_gerror ) + { + /* trigger global error irq to guest */ + pending = smmu->gerror ^ smmu->gerrorn; + new_gerrors = ~pending & gerror_err; + + /* only toggle non pending errors */ + if (!new_gerrors) + return; + + smmu->gerror ^= new_gerrors; + } + + vgic_inject_irq(smmu->d, NULL, smmu->virq, true); +} + +static int arm_vsmmu_write_evtq(struct virt_smmu *smmu, uint64_t *evt) +{ + struct arm_vsmmu_queue *q = &smmu->evtq; + struct domain *d = smmu->d; + paddr_t addr; + int ret; + + if ( !smmu_get_evtq_enabled(smmu->cr[0]) ) + return -EINVAL; + + if ( queue_full(q) ) + return -EINVAL; + + addr = Q_PROD_ENT(q); + ret = access_guest_memory_by_ipa(d, addr, evt, + sizeof(*evt) * EVTQ_ENT_DWORDS, true); + if ( ret ) + return ret; + + queue_inc_prod(q); + + /* trigger eventq irq to guest */ + if ( !queue_empty(q) ) + arm_vsmmu_inject_irq(smmu, false, 0); + + return 0; +} + +void arm_vsmmu_send_event(struct virt_smmu *smmu, + struct arm_smmu_evtq_ent *ent) +{ + uint64_t evt[EVTQ_ENT_DWORDS]; + int ret; + + memset(evt, 0, 1 << EVTQ_ENT_SZ_SHIFT); + + if ( !smmu_get_evtq_enabled(smmu->cr[0]) ) + return; + + evt[0] |= FIELD_PREP(EVTQ_0_ID, ent->opcode); + evt[0] |= FIELD_PREP(EVTQ_0_SID, ent->sid); + + switch (ent->opcode) + { + case EVT_ID_BAD_STREAMID: + case EVT_ID_BAD_STE: + evt[0] |= FIELD_PREP(EVTQ_0_SSID, ent->c_bad_ste_streamid.ssid); + evt[0] |= FIELD_PREP(EVTQ_0_SSV, ent->c_bad_ste_streamid.ssv); + break; + case EVT_ID_TRANSLATION_FAULT: + case EVT_ID_ADDR_SIZE_FAULT: + case EVT_ID_ACCESS_FAULT: + case EVT_ID_PERMISSION_FAULT: + break; + default: + gdprintk(XENLOG_WARNING, "vSMMUv3: event opcode is bad\n"); + break; + } + + ret = arm_vsmmu_write_evtq(smmu, evt); + if ( ret ) + arm_vsmmu_inject_irq(smmu, true, GERROR_EVTQ_ABT_ERR); + + return; +} + static int arm_vsmmu_find_ste(struct virt_smmu *smmu, uint32_t sid, uint64_t *ste) { @@ -113,11 +244,22 @@ static int arm_vsmmu_find_ste(struct virt_smmu *smmu, uint32_t sid, uint32_t log2size; int strtab_size_shift; int ret; + struct arm_smmu_evtq_ent ent = { + .sid = sid, + .c_bad_ste_streamid = { + .ssid = 0, + .ssv = false, + }, + }; log2size = FIELD_GET(STRTAB_BASE_CFG_LOG2SIZE, smmu->strtab_base_cfg); if ( sid >= (1 << MIN(log2size, SMMU_IDR1_SIDSIZE)) ) + { + ent.opcode = EVT_ID_BAD_STE; + arm_vsmmu_send_event(smmu, &ent); return -EINVAL; + } if ( smmu->features & STRTAB_BASE_CFG_FMT_2LVL ) { @@ -155,6 +297,8 @@ static int arm_vsmmu_find_ste(struct virt_smmu *smmu, uint32_t sid, { gdprintk(XENLOG_ERR, "idx=%d > max_l2_ste=%d\n", idx, max_l2_ste); + ent.opcode = EVT_ID_BAD_STREAMID; + arm_vsmmu_send_event(smmu, &ent); return -EINVAL; } addr = l2ptr + idx * sizeof(*ste) * STRTAB_STE_DWORDS; @@ -182,6 +326,14 @@ static int arm_vsmmu_decode_ste(struct virt_smmu *smmu, 
uint32_t sid, uint64_t *ste) { uint64_t val = ste[0]; + struct arm_smmu_evtq_ent ent = { + .opcode = EVT_ID_BAD_STE, + .sid = sid, + .c_bad_ste_streamid = { + .ssid = 0, + .ssv = false, + }, + }; if ( !(val & STRTAB_STE_0_V) ) return -EAGAIN; @@ -216,6 +368,7 @@ static int arm_vsmmu_decode_ste(struct virt_smmu *smmu, uint32_t sid, return 0; bad_ste: + arm_vsmmu_send_event(smmu, &ent); return -EINVAL; } @@ -572,7 +725,8 @@ static const struct mmio_handler_ops vsmmuv3_mmio_handler = { .write = vsmmuv3_mmio_write, }; -static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size) +static int vsmmuv3_init_single(struct domain *d, paddr_t addr, + paddr_t size, uint32_t virq) { struct virt_smmu *smmu; @@ -581,6 +735,7 @@ static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size) return -ENOMEM; smmu->d = d; + smmu->virq = virq; smmu->cmdq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_CMDQS); smmu->cmdq.ent_size = CMDQ_ENT_DWORDS * DWORDS_BYTES; smmu->evtq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_EVTQS); @@ -607,14 +762,16 @@ int domain_vsmmuv3_init(struct domain *d) list_for_each_entry(hw_iommu, &host_iommu_list, entry) { - ret = vsmmuv3_init_single(d, hw_iommu->addr, hw_iommu->size); + ret = vsmmuv3_init_single(d, hw_iommu->addr, hw_iommu->size, + hw_iommu->irq); if ( ret ) return ret; } } else { - ret = vsmmuv3_init_single(d, GUEST_VSMMUV3_BASE, GUEST_VSMMUV3_SIZE); + ret = vsmmuv3_init_single(d, GUEST_VSMMUV3_BASE, GUEST_VSMMUV3_SIZE, + GUEST_VSMMU_SPI); if ( ret ) return ret; } diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h index 24b52fa017..634f689e77 100644 --- a/xen/include/public/arch-arm.h +++ b/xen/include/public/arch-arm.h @@ -488,9 +488,10 @@ typedef uint64_t xen_callback_t; #define GUEST_EVTCHN_PPI 31 #define GUEST_VPL011_SPI 32 +#define GUEST_VSMMU_SPI 33 -#define GUEST_VIRTIO_MMIO_SPI_FIRST 33 -#define GUEST_VIRTIO_MMIO_SPI_LAST 43 +#define GUEST_VIRTIO_MMIO_SPI_FIRST 34 +#define GUEST_VIRTIO_MMIO_SPI_LAST 44 /* PSCI functions */ #define PSCI_cpu_suspend 0 From patchwork Thu Dec 1 16:02:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061546 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 887C4C43217 for ; Thu, 1 Dec 2022 16:13:40 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451003.708572 (Exim 4.92) (envelope-from ) id 1p0mBn-0001JP-43; Thu, 01 Dec 2022 16:13:31 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451003.708572; Thu, 01 Dec 2022 16:13:31 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mBn-0001JE-0K; Thu, 01 Dec 2022 16:13:31 +0000 Received: by outflank-mailman (input) for mailman id 451003; Thu, 01 Dec 2022 16:13:29 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mBl-0008UE-6q for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:13:29 +0000 Received: from foss.arm.com (foss.arm.com 
[217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 171bfda8-7193-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:13:28 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 417E5D6E; Thu, 1 Dec 2022 08:13:34 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D2B1C3F73B; Thu, 1 Dec 2022 08:13:26 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 171bfda8-7193-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 13/21] xen/arm: vsmmuv3: Add "iommus" property node for dom0 devices Date: Thu, 1 Dec 2022 16:02:37 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 "iommus" property will be added for dom0 devices to virtual IOMMU node to enable the dom0 linux kernel to configure the IOMMU Signed-off-by: Rahul Singh --- xen/arch/arm/domain_build.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index abbaf37a2e..a5295e8c3e 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -1172,9 +1172,12 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo, continue; } - if ( iommu_node ) + /* + * Expose IOMMU specific properties to hwdom when vIOMMU is + * enabled. + */ + if ( iommu_node && !is_viommu_enabled() ) { - /* Don't expose IOMMU specific properties to hwdom */ if ( dt_property_name_is_equal(prop, "iommus") ) continue; From patchwork Thu Dec 1 16:02:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061558 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D01C3C43217 for ; Thu, 1 Dec 2022 16:20:38 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451056.708671 (Exim 4.92) (envelope-from ) id 1p0mIZ-0007aX-SR; Thu, 01 Dec 2022 16:20:31 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451056.708671; Thu, 01 Dec 2022 16:20:31 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mIZ-0007aO-PJ; Thu, 01 Dec 2022 16:20:31 +0000 Received: by outflank-mailman (input) for mailman id 451056; Thu, 01 Dec 2022 16:20:29 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mCF-00075z-0U for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:13:59 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 28c57f2a-7193-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 
17:13:57 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F1260D6E; Thu, 1 Dec 2022 08:14:03 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E7EC3F73B; Thu, 1 Dec 2022 08:13:56 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 28c57f2a-7193-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 14/21] xen/arm: vIOMMU: IOMMU device tree node for dom0 Date: Thu, 1 Dec 2022 16:02:38 +0000 Message-Id: <544b8450c977f6d005f1d9adee8e0ff33b9bd3ec.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 XEN will create an IOMMU device tree node in the device tree to enable the dom0 to discover the virtual SMMUv3 during dom0 boot. IOMMU device tree node will only be created when cmdline option viommu is enabled. Signed-off-by: Rahul Singh --- xen/arch/arm/domain_build.c | 94 +++++++++++++++++++++++++++++++ xen/arch/arm/include/asm/viommu.h | 1 + 2 files changed, 95 insertions(+) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index a5295e8c3e..b82121beb5 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -2233,6 +2233,95 @@ int __init make_chosen_node(const struct kernel_info *kinfo) return res; } +#ifdef CONFIG_VIRTUAL_IOMMU +static int make_hwdom_viommu_node(const struct kernel_info *kinfo) +{ + uint32_t len; + int res; + char buf[24]; + void *fdt = kinfo->fdt; + const void *prop = NULL; + const struct dt_device_node *iommu = NULL; + struct host_iommu *iommu_data; + gic_interrupt_t intr; + + if ( list_empty(&host_iommu_list) ) + return 0; + + list_for_each_entry( iommu_data, &host_iommu_list, entry ) + { + if ( iommu_data->hwdom_node_created ) + return 0; + + iommu = iommu_data->dt_node; + + snprintf(buf, sizeof(buf), "iommu@%"PRIx64, iommu_data->addr); + + res = fdt_begin_node(fdt, buf); + if ( res ) + return res; + + prop = dt_get_property(iommu, "compatible", &len); + if ( !prop ) + { + res = -FDT_ERR_XEN(ENOENT); + return res; + } + + res = fdt_property(fdt, "compatible", prop, len); + if ( res ) + return res; + + if ( iommu->phandle ) + { + res = fdt_property_cell(fdt, "phandle", iommu->phandle); + if ( res ) + return res; + } + + /* Use the same reg regions as the IOMMU node in host DTB. 
*/ + prop = dt_get_property(iommu, "reg", &len); + if ( !prop ) + { + printk(XENLOG_ERR "vIOMMU: Can't find IOMMU reg property.\n"); + res = -FDT_ERR_XEN(ENOENT); + return res; + } + + res = fdt_property(fdt, "reg", prop, len); + if ( res ) + return res; + + prop = dt_get_property(iommu, "#iommu-cells", &len); + if ( !prop ) + { + res = -FDT_ERR_XEN(ENOENT); + return res; + } + + res = fdt_property(fdt, "#iommu-cells", prop, len); + if ( res ) + return res; + + res = fdt_property_string(fdt, "interrupt-names", "combined"); + if ( res ) + return res; + + set_interrupt(intr, iommu_data->irq, 0xf, DT_IRQ_TYPE_LEVEL_HIGH); + + res = fdt_property_interrupts(kinfo, &intr, 1); + if ( res ) + return res; + + iommu_data->hwdom_node_created = true; + + fdt_end_node(fdt); + } + + return res; +} +#endif + int __init map_irq_to_domain(struct domain *d, unsigned int irq, bool need_mapping, const char *devname) { @@ -2587,6 +2676,11 @@ static int __init handle_node(struct domain *d, struct kernel_info *kinfo, if ( dt_match_node(timer_matches, node) ) return make_timer_node(kinfo); +#ifdef CONFIG_VIRTUAL_IOMMU + if ( device_get_class(node) == DEVICE_IOMMU && is_viommu_enabled() ) + return make_hwdom_viommu_node(kinfo); +#endif + /* Skip nodes used by Xen */ if ( dt_device_used_by(node) == DOMID_XEN ) { diff --git a/xen/arch/arm/include/asm/viommu.h b/xen/arch/arm/include/asm/viommu.h index 4de4cceeda..e6018f435b 100644 --- a/xen/arch/arm/include/asm/viommu.h +++ b/xen/arch/arm/include/asm/viommu.h @@ -19,6 +19,7 @@ struct host_iommu { paddr_t addr; paddr_t size; uint32_t irq; + bool hwdom_node_created; }; struct viommu_ops { From patchwork Thu Dec 1 16:02:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061547 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B862FC4321E for ; Thu, 1 Dec 2022 16:15:47 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451008.708583 (Exim 4.92) (envelope-from ) id 1p0mDs-00022f-HJ; Thu, 01 Dec 2022 16:15:40 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451008.708583; Thu, 01 Dec 2022 16:15:40 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mDs-00022Y-DA; Thu, 01 Dec 2022 16:15:40 +0000 Received: by outflank-mailman (input) for mailman id 451008; Thu, 01 Dec 2022 16:15:39 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mDr-00020U-6D for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:15:39 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 647e17e1-7193-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:15:38 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 37060D6E; Thu, 1 Dec 2022 08:15:44 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by 
usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5CFB73F73B; Thu, 1 Dec 2022 08:15:36 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 647e17e1-7193-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk , Andrew Cooper , George Dunlap , Jan Beulich , Wei Liu Subject: [RFC PATCH 15/21] xen/arm: vsmmuv3: Emulated SMMUv3 device tree node for dom0less Date: Thu, 1 Dec 2022 16:02:39 +0000 Message-Id: <4e4d4fff4bb20d9718bd61b729f9421525baaa15.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 XEN will create an Emulated SMMUv3 device tree node in the device tree to enable the dom0less domains to discover the virtual SMMUv3 during boot. Emulated SMMUv3 device tree node will only be created when cmdline option vsmmuv3 is enabled. Signed-off-by: Rahul Singh --- xen/arch/arm/domain_build.c | 52 +++++++++++++++++++++++++++ xen/include/public/device_tree_defs.h | 1 + 2 files changed, 53 insertions(+) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index b82121beb5..29f00b18ec 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -2322,6 +2322,49 @@ static int make_hwdom_viommu_node(const struct kernel_info *kinfo) } #endif +#ifdef CONFIG_VIRTUAL_ARM_SMMU_V3 +static int __init make_vsmmuv3_node(const struct kernel_info *kinfo) +{ + int res; + char buf[24]; + __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS]; + __be32 *cells; + void *fdt = kinfo->fdt; + + snprintf(buf, sizeof(buf), "iommu@%llx", GUEST_VSMMUV3_BASE); + + res = fdt_begin_node(fdt, buf); + if ( res ) + return res; + + res = fdt_property_string(fdt, "compatible", "arm,smmu-v3"); + if ( res ) + return res; + + /* Create reg property */ + cells = ®[0]; + dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS, + GUEST_VSMMUV3_BASE, GUEST_VSMMUV3_SIZE); + res = fdt_property(fdt, "reg", reg, + (GUEST_ROOT_ADDRESS_CELLS + + GUEST_ROOT_SIZE_CELLS) * sizeof(*reg)); + if ( res ) + return res; + + res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_VSMMUV3); + if ( res ) + return res; + + res = fdt_property_cell(fdt, "#iommu-cells", 1); + if ( res ) + return res; + + res = fdt_end_node(fdt); + + return res; +} +#endif + int __init map_irq_to_domain(struct domain *d, unsigned int irq, bool need_mapping, const char *devname) { @@ -3395,6 +3438,15 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo) goto err; } +#ifdef CONFIG_VIRTUAL_ARM_SMMU_V3 + if ( is_viommu_enabled() ) + { + ret = make_vsmmuv3_node(kinfo); + if ( ret ) + goto err; + } +#endif + ret = fdt_end_node(kinfo->fdt); if ( ret < 0 ) goto err; diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h index 9e80d0499d..7846a0425c 100644 --- a/xen/include/public/device_tree_defs.h +++ b/xen/include/public/device_tree_defs.h @@ -14,6 +14,7 @@ */ #define GUEST_PHANDLE_GIC (65000) #define GUEST_PHANDLE_IOMMU (GUEST_PHANDLE_GIC + 1) +#define GUEST_PHANDLE_VSMMUV3 (GUEST_PHANDLE_IOMMU + 1) #define GUEST_ROOT_ADDRESS_CELLS 2 #define GUEST_ROOT_SIZE_CELLS 2 From patchwork Thu Dec 1 16:02:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061548 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 9BB75C47088 for ; Thu, 1 Dec 2022 16:17:10 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451016.708593 (Exim 4.92) (envelope-from ) id 1p0mFC-0002dR-Qm; Thu, 01 Dec 2022 16:17:02 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451016.708593; Thu, 01 Dec 2022 16:17:02 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mFC-0002dK-O7; Thu, 01 Dec 2022 16:17:02 +0000 Received: by outflank-mailman (input) for mailman id 451016; Thu, 01 Dec 2022 16:17:01 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mFB-0002cy-II for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:17:01 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id 95b73105-7193-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:17:00 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B6602D6E; Thu, 1 Dec 2022 08:17:06 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6E6273F73B; Thu, 1 Dec 2022 08:16:59 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 95b73105-7193-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Wei Liu , Anthony PERARD , Juergen Gross Subject: [RFC PATCH 16/21] arm/libxl: vsmmuv3: Emulated SMMUv3 device tree node in libxl Date: Thu, 1 Dec 2022 16:02:40 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 libxl will create an Emulated SMMUv3 device tree node in the device tree to enable the guest OS to discover the virtual SMMUv3 during guest boot. Emulated SMMUv3 device tree node will only be created when "viommu=smmuv3" is set in xl domain configuration. 
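For illustration, the flat device tree node that libxl emits for the guest can be sketched with the plain libfdt sequential-write API as below. This is a condensed, hedged sketch rather than the libxl implementation: the helper name and the VSMMU_BASE/VSMMU_SIZE/VSMMU_PHANDLE placeholder values stand in for the GUEST_VSMMUV3_BASE, GUEST_VSMMUV3_SIZE and GUEST_PHANDLE_VSMMUV3 constants defined elsewhere in this series, and libxl's fdt_property_compat()/fdt_property_regs() wrappers are replaced by their raw libfdt equivalents.

/*
 * Sketch only: emit an "iommu@..." node equivalent to the one
 * make_vsmmuv3_node() creates, using the libfdt sequential-write API.
 * VSMMU_BASE/VSMMU_SIZE/VSMMU_PHANDLE are placeholder values, not the
 * real GUEST_VSMMUV3_* constants from this series.
 */
#include <stdio.h>
#include <stdint.h>
#include <libfdt.h>

#define VSMMU_BASE    0x4f000000ULL  /* placeholder guest-physical base */
#define VSMMU_SIZE    0x00020000ULL  /* placeholder MMIO region size */
#define VSMMU_PHANDLE 65002U         /* placeholder phandle value */

static int emit_vsmmuv3_node(void *fdt)
{
    /* reg = <base size> encoded for #address-cells = <2>, #size-cells = <2> */
    fdt64_t reg[2] = { cpu_to_fdt64(VSMMU_BASE), cpu_to_fdt64(VSMMU_SIZE) };
    char name[32];
    int res;

    snprintf(name, sizeof(name), "iommu@%llx", (unsigned long long)VSMMU_BASE);

    res = fdt_begin_node(fdt, name);
    if (res) return res;
    res = fdt_property_string(fdt, "compatible", "arm,smmu-v3");
    if (res) return res;
    res = fdt_property(fdt, "reg", reg, sizeof(reg));
    if (res) return res;
    res = fdt_property_cell(fdt, "phandle", VSMMU_PHANDLE);
    if (res) return res;
    /* one specifier cell per master: the StreamID */
    res = fdt_property_cell(fdt, "#iommu-cells", 1);
    if (res) return res;

    return fdt_end_node(fdt);
}

A complete tree would wrap this between fdt_create()/a root fdt_begin_node() and fdt_end_node()/fdt_finish(); in libxl that framing already exists around make_vsmmuv3_node(), so the node only has to be emitted when viommu_type is LIBXL_VIOMMU_TYPE_SMMUV3.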
Signed-off-by: Rahul Singh --- tools/libs/light/libxl_arm.c | 39 ++++++++++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index b8eff10a41..00fcbd466c 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -831,6 +831,36 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt, return 0; } +static int make_vsmmuv3_node(libxl__gc *gc, void *fdt, + const struct arch_info *ainfo, + struct xc_dom_image *dom) +{ + int res; + const char *name = GCSPRINTF("iommu@%llx", GUEST_VSMMUV3_BASE); + + res = fdt_begin_node(fdt, name); + if (res) return res; + + res = fdt_property_compat(gc, fdt, 1, "arm,smmu-v3"); + if (res) return res; + + res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, + GUEST_ROOT_SIZE_CELLS, 1, GUEST_VSMMUV3_BASE, + GUEST_VSMMUV3_SIZE); + if (res) return res; + + res = fdt_property_cell(fdt, "phandle", GUEST_PHANDLE_VSMMUV3); + if (res) return res; + + res = fdt_property_cell(fdt, "#iommu-cells", 1); + if (res) return res; + + res = fdt_end_node(fdt); + if (res) return res; + + return 0; +} + static int make_vpci_node(libxl__gc *gc, void *fdt, const struct arch_info *ainfo, struct xc_dom_image *dom) @@ -872,6 +902,12 @@ static int make_vpci_node(libxl__gc *gc, void *fdt, GUEST_VPCI_PREFETCH_MEM_SIZE); if (res) return res; + if (res) return res; + + res = fdt_property_values(gc, fdt, "iommu-map", 4, 0, + GUEST_PHANDLE_VSMMUV3, 0, 0x10000); + if (res) return res; + res = fdt_end_node(fdt); if (res) return res; @@ -1251,6 +1287,9 @@ next_resize: if (d_config->num_pcidevs) FDT( make_vpci_node(gc, fdt, ainfo, dom) ); + if (info->arch_arm.viommu_type == LIBXL_VIOMMU_TYPE_SMMUV3) + FDT( make_vsmmuv3_node(gc, fdt, ainfo, dom) ); + iommu_created = false; for (i = 0; i < d_config->num_disks; i++) { libxl_device_disk *disk = &d_config->disks[i]; From patchwork Thu Dec 1 16:02:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061553 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 03095C43217 for ; Thu, 1 Dec 2022 16:18:12 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451021.708605 (Exim 4.92) (envelope-from ) id 1p0mG5-0003BI-44; Thu, 01 Dec 2022 16:17:57 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451021.708605; Thu, 01 Dec 2022 16:17:57 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mG5-0003BB-0a; Thu, 01 Dec 2022 16:17:57 +0000 Received: by outflank-mailman (input) for mailman id 451021; Thu, 01 Dec 2022 16:17:56 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mG3-0002as-R2 for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:17:55 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id b5fbf219-7193-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:17:54 +0100 (CET) Received: from 
usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DB3DCD6E; Thu, 1 Dec 2022 08:18:00 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 289A73F73B; Thu, 1 Dec 2022 08:17:53 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: b5fbf219-7193-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Wei Liu , Anthony PERARD , Juergen Gross , Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 17/21] xen/arm: vsmmuv3: Alloc virq for virtual SMMUv3 Date: Thu, 1 Dec 2022 16:02:41 +0000 Message-Id: <1eb767c65e4ca07c6d10c7dc2cdb514535a4b484.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Alloc and reserve virq for event queue and global error to send event to guests. Also Modify the libxl to accomadate the new define virq. Signed-off-by: Rahul Singh --- tools/libs/light/libxl_arm.c | 24 ++++++++++++++++++++++-- xen/arch/arm/domain_build.c | 11 +++++++++++ xen/drivers/passthrough/arm/vsmmu-v3.c | 10 ++++++++++ 3 files changed, 43 insertions(+), 2 deletions(-) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index 00fcbd466c..f2bb7d40dc 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -66,8 +66,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, { uint32_t nr_spis = 0; unsigned int i; - uint32_t vuart_irq, virtio_irq = 0; - bool vuart_enabled = false, virtio_enabled = false; + uint32_t vuart_irq, virtio_irq = 0, vsmmu_irq = 0; + bool vuart_enabled = false, virtio_enabled = false, vsmmu_enabled = false; uint64_t virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE; uint32_t virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST; @@ -81,6 +81,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, vuart_enabled = true; } + if (d_config->num_pcidevs || d_config->b_info.device_tree) { + nr_spis += (GUEST_VSMMU_SPI - 32) + 1; + vsmmu_irq = GUEST_VSMMU_SPI; + vsmmu_enabled = true; + } + for (i = 0; i < d_config->num_disks; i++) { libxl_device_disk *disk = &d_config->disks[i]; @@ -136,6 +142,11 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc, return ERROR_FAIL; } + if (vsmmu_enabled && irq == vsmmu_irq) { + LOG(ERROR, "Physical IRQ %u conflicting with vSMMUv3 SPI\n", irq); + return ERROR_FAIL; + } + if (irq < 32) continue; @@ -837,6 +848,7 @@ static int make_vsmmuv3_node(libxl__gc *gc, void *fdt, { int res; const char *name = GCSPRINTF("iommu@%llx", GUEST_VSMMUV3_BASE); + gic_interrupt intr; res = fdt_begin_node(fdt, name); if (res) return res; @@ -855,6 +867,14 @@ static int make_vsmmuv3_node(libxl__gc *gc, void *fdt, res = fdt_property_cell(fdt, "#iommu-cells", 1); if (res) return res; + res = fdt_property_string(fdt, "interrupt-names", "combined"); + if (res) return res; + + set_interrupt(intr, GUEST_VSMMU_SPI, 0xf, DT_IRQ_TYPE_LEVEL_HIGH); + + res = fdt_property_interrupts(gc, fdt, &intr, 1); + if (res) return res; + res = fdt_end_node(fdt); if (res) return res; diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 29f00b18ec..8e85fb7854 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -2329,6 +2329,7 @@ static int 
__init make_vsmmuv3_node(const struct kernel_info *kinfo) char buf[24]; __be32 reg[GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS]; __be32 *cells; + gic_interrupt_t intr; void *fdt = kinfo->fdt; snprintf(buf, sizeof(buf), "iommu@%llx", GUEST_VSMMUV3_BASE); @@ -2359,6 +2360,16 @@ static int __init make_vsmmuv3_node(const struct kernel_info *kinfo) if ( res ) return res; + res = fdt_property_string(fdt, "interrupt-names", "combined"); + if ( res ) + return res; + + set_interrupt(intr, GUEST_VSMMU_SPI, 0xf, DT_IRQ_TYPE_LEVEL_HIGH); + + res = fdt_property_interrupts(kinfo, &intr, 1); + if ( res ) + return res; + res = fdt_end_node(fdt); return res; diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index 031c1f74b6..b280b70da0 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c @@ -728,6 +728,7 @@ static const struct mmio_handler_ops vsmmuv3_mmio_handler = { static int vsmmuv3_init_single(struct domain *d, paddr_t addr, paddr_t size, uint32_t virq) { + int ret; struct virt_smmu *smmu; smmu = xzalloc(struct virt_smmu); @@ -743,12 +744,21 @@ static int vsmmuv3_init_single(struct domain *d, paddr_t addr, spin_lock_init(&smmu->cmd_queue_lock); + ret = vgic_reserve_virq(d, virq); + if ( !ret ) + goto out; + register_mmio_handler(d, &vsmmuv3_mmio_handler, addr, size, smmu); /* Register the vIOMMU to be able to clean it up later. */ list_add_tail(&smmu->viommu_list, &d->arch.viommu_list); return 0; + +out: + xfree(smmu); + vgic_free_virq(d, virq); + return ret; } int domain_vsmmuv3_init(struct domain *d) From patchwork Thu Dec 1 16:02:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061554 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 71738C43217 for ; Thu, 1 Dec 2022 16:18:39 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451029.708616 (Exim 4.92) (envelope-from ) id 1p0mGd-0003l1-FQ; Thu, 01 Dec 2022 16:18:31 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451029.708616; Thu, 01 Dec 2022 16:18:31 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mGd-0003ku-CD; Thu, 01 Dec 2022 16:18:31 +0000 Received: by outflank-mailman (input) for mailman id 451029; Thu, 01 Dec 2022 16:18:30 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mGc-0002as-1T for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:18:30 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id ca708677-7193-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:18:29 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3BA32D6E; Thu, 1 Dec 2022 08:18:35 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com 
(Postfix) with ESMTPSA id CD0FE3F73B; Thu, 1 Dec 2022 08:18:27 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: ca708677-7193-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 18/21] xen/arm: iommu: skip the iommu-map property for PCI devices Date: Thu, 1 Dec 2022 16:02:42 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Current code skip the IOMMUS specific properties for the non PCI devices when handling the dom0 node but there is no support to skip the IOMMUS specific properties for the PCI devices. This patch will add the support to skip the IOMMUS specific properties for the PCI devices. Signed-off-by: Rahul Singh --- xen/arch/arm/domain_build.c | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 8e85fb7854..7cd99a6771 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -1112,9 +1112,18 @@ static int __init write_properties(struct domain *d, struct kernel_info *kinfo, * Use "iommu_node" as an indicator of the master device which properties * should be skipped. */ - iommu_node = dt_parse_phandle(node, "iommus", 0); - if ( iommu_node && device_get_class(iommu_node) != DEVICE_IOMMU ) - iommu_node = NULL; + if ( dt_device_type_is_equal(node, "pci") ) + { + iommu_node = dt_parse_phandle(node, "iommu-map", 1); + if ( iommu_node && device_get_class(iommu_node) != DEVICE_IOMMU ) + iommu_node = NULL; + } + else + { + iommu_node = dt_parse_phandle(node, "iommus", 0); + if ( iommu_node && device_get_class(iommu_node) != DEVICE_IOMMU ) + iommu_node = NULL; + } dt_for_each_property_node (node, prop) { From patchwork Thu Dec 1 16:02:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061555 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id E8CCAC47088 for ; Thu, 1 Dec 2022 16:19:24 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451035.708627 (Exim 4.92) (envelope-from ) id 1p0mHK-0004Ll-OH; Thu, 01 Dec 2022 16:19:14 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451035.708627; Thu, 01 Dec 2022 16:19:14 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mHK-0004Le-L9; Thu, 01 Dec 2022 16:19:14 +0000 Received: by outflank-mailman (input) for mailman id 451035; Thu, 01 Dec 2022 16:19:12 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mHI-0003Vx-M5 for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:19:12 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com 
(Halon) with ESMTP id e3beadba-7193-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:19:11 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 820CED6E; Thu, 1 Dec 2022 08:19:17 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1F4073F73B; Thu, 1 Dec 2022 08:19:10 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: e3beadba-7193-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Bertrand Marquis , Stefano Stabellini , Julien Grall , Volodymyr Babchuk Subject: [RFC PATCH 19/21] xen/arm: vsmmuv3: Add support to send stage-1 event to guest Date: Thu, 1 Dec 2022 16:02:43 +0000 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Stage-1 translation is handled by guest, therefore stage-1 fault has to be forwarded to guest. Signed-off-by: Rahul Singh --- xen/drivers/passthrough/arm/smmu-v3.c | 48 ++++++++++++++++++++++++-- xen/drivers/passthrough/arm/vsmmu-v3.c | 45 ++++++++++++++++++++++++ xen/drivers/passthrough/arm/vsmmu-v3.h | 12 +++++++ 3 files changed, 103 insertions(+), 2 deletions(-) diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c index c4b4a5d86d..e17debc456 100644 --- a/xen/drivers/passthrough/arm/smmu-v3.c +++ b/xen/drivers/passthrough/arm/smmu-v3.c @@ -871,7 +871,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } -__maybe_unused static struct arm_smmu_master * arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) { @@ -892,10 +891,51 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) return NULL; } +static int arm_smmu_handle_evt(struct arm_smmu_device *smmu, u64 *evt) +{ + int ret; + struct arm_smmu_master *master; + u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]); + + switch (FIELD_GET(EVTQ_0_ID, evt[0])) { + case EVT_ID_TRANSLATION_FAULT: + break; + case EVT_ID_ADDR_SIZE_FAULT: + break; + case EVT_ID_ACCESS_FAULT: + break; + case EVT_ID_PERMISSION_FAULT: + break; + default: + return -EOPNOTSUPP; + } + + /* Stage-2 event */ + if (evt[1] & EVTQ_1_S2) + return -EFAULT; + + mutex_lock(&smmu->streams_mutex); + master = arm_smmu_find_master(smmu, sid); + if (!master) { + ret = -EINVAL; + goto out_unlock; + } + + ret = arm_vsmmu_handle_evt(master->domain->d, smmu->dev, evt); + if (ret) { + ret = -EINVAL; + goto out_unlock; + } + +out_unlock: + mutex_unlock(&smmu->streams_mutex); + return ret; +} + /* IRQ and event handlers */ static void arm_smmu_evtq_tasklet(void *dev) { - int i; + int i, ret; struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = &smmu->evtq.q; struct arm_smmu_ll_queue *llq = &q->llq; @@ -905,6 +945,10 @@ static void arm_smmu_evtq_tasklet(void *dev) while (!queue_remove_raw(q, evt)) { u8 id = FIELD_GET(EVTQ_0_ID, evt[0]); + ret = arm_smmu_handle_evt(smmu, evt); + if (!ret) + continue; + dev_info(smmu->dev, "event 0x%02x received:\n", id); for (i = 0; i < ARRAY_SIZE(evt); ++i) dev_info(smmu->dev, "\t0x%016llx\n", diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.c b/xen/drivers/passthrough/arm/vsmmu-v3.c index b280b70da0..cd8b62d806 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.c +++ b/xen/drivers/passthrough/arm/vsmmu-v3.c 
@@ -102,6 +102,7 @@ struct arm_vsmmu_queue { struct virt_smmu { struct domain *d; struct list_head viommu_list; + paddr_t addr; uint8_t sid_split; uint32_t features; uint32_t cr[3]; @@ -236,6 +237,49 @@ void arm_vsmmu_send_event(struct virt_smmu *smmu, return; } +static struct virt_smmu *vsmmuv3_find_by_addr(struct domain *d, paddr_t paddr) +{ + struct virt_smmu *smmu; + + list_for_each_entry( smmu, &d->arch.viommu_list, viommu_list ) + { + if ( smmu->addr == paddr ) + return smmu; + } + + return NULL; +} + +int arm_vsmmu_handle_evt(struct domain *d, struct device *dev, uint64_t *evt) +{ + int ret; + struct virt_smmu *smmu; + + if ( is_hardware_domain(d) ) + { + paddr_t paddr; + /* Base address */ + ret = dt_device_get_address(dev_to_dt(dev), 0, &paddr, NULL); + if ( ret ) + return -EINVAL; + + smmu = vsmmuv3_find_by_addr(d, paddr); + if ( !smmu ) + return -ENODEV; + } + else + { + smmu = list_entry(d->arch.viommu_list.next, + struct virt_smmu, viommu_list); + } + + ret = arm_vsmmu_write_evtq(smmu, evt); + if ( ret ) + arm_vsmmu_inject_irq(smmu, true, GERROR_EVTQ_ABT_ERR); + + return 0; +} + static int arm_vsmmu_find_ste(struct virt_smmu *smmu, uint32_t sid, uint64_t *ste) { @@ -737,6 +781,7 @@ static int vsmmuv3_init_single(struct domain *d, paddr_t addr, smmu->d = d; smmu->virq = virq; + smmu->addr = addr; smmu->cmdq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_CMDQS); smmu->cmdq.ent_size = CMDQ_ENT_DWORDS * DWORDS_BYTES; smmu->evtq.q_base = FIELD_PREP(Q_BASE_LOG2SIZE, SMMU_EVTQS); diff --git a/xen/drivers/passthrough/arm/vsmmu-v3.h b/xen/drivers/passthrough/arm/vsmmu-v3.h index e11f85b431..c7bfd3fb59 100644 --- a/xen/drivers/passthrough/arm/vsmmu-v3.h +++ b/xen/drivers/passthrough/arm/vsmmu-v3.h @@ -8,6 +8,12 @@ void vsmmuv3_set_type(void); +static inline int arm_vsmmu_handle_evt(struct domain *d, + struct device *dev, uint64_t *evt) +{ + return -EINVAL; +} + #else static inline void vsmmuv3_set_type(void) @@ -15,6 +21,12 @@ static inline void vsmmuv3_set_type(void) return; } +static inline int arm_vsmmu_handle_evt(struct domain *d, + struct device *dev, uint64_t *evt) +{ + return -EINVAL; +} + #endif /* CONFIG_VIRTUAL_ARM_SMMU_V3 */ #endif /* __ARCH_ARM_VSMMU_V3_H__ */ From patchwork Thu Dec 1 16:02:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061556 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 89289C4321E for ; Thu, 1 Dec 2022 16:19:48 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451038.708637 (Exim 4.92) (envelope-from ) id 1p0mHm-0004q5-0B; Thu, 01 Dec 2022 16:19:42 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451038.708637; Thu, 01 Dec 2022 16:19:41 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mHl-0004py-TT; Thu, 01 Dec 2022 16:19:41 +0000 Received: by outflank-mailman (input) for mailman id 451038; Thu, 01 Dec 2022 16:19:40 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 
4.92) (envelope-from ) id 1p0mHk-0003Vx-Od for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:19:40 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-flk1.inumbo.com (Halon) with ESMTP id f4a24343-7193-11ed-8fd2-01056ac49cbb; Thu, 01 Dec 2022 17:19:39 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 05FDAD6E; Thu, 1 Dec 2022 08:19:46 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B1F5E3F73B; Thu, 1 Dec 2022 08:19:38 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: f4a24343-7193-11ed-8fd2-01056ac49cbb From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Wei Liu , Anthony PERARD , Juergen Gross Subject: [RFC PATCH 20/21] libxl/arm: vIOMMU: Modify the partial device tree for iommus Date: Thu, 1 Dec 2022 16:02:44 +0000 Message-Id: <45ac5b639319b7282086bac609329ca3c5a411bf.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 To configure IOMMU in guest for passthrough devices, user will need to copy the unmodified "iommus" property from host device tree to partial device tree. To enable the dom0 linux kernel to confiure the IOMMU correctly replace the phandle in partial device tree with virtual IOMMU phandle when "iommus" property is set. Signed-off-by: Rahul Singh --- tools/libs/light/libxl_arm.c | 47 +++++++++++++++++++++++++++++++++++- 1 file changed, 46 insertions(+), 1 deletion(-) diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c index f2bb7d40dc..16d068404f 100644 --- a/tools/libs/light/libxl_arm.c +++ b/tools/libs/light/libxl_arm.c @@ -1167,6 +1167,41 @@ static int copy_partial_fdt(libxl__gc *gc, void *fdt, void *pfdt) return 0; } +static int modify_partial_fdt(libxl__gc *gc, void *pfdt) +{ + int nodeoff, proplen, i, r; + const fdt32_t *prop; + fdt32_t *prop_c; + + nodeoff = fdt_path_offset(pfdt, "/passthrough"); + if (nodeoff < 0) + return nodeoff; + + for (nodeoff = fdt_first_subnode(pfdt, nodeoff); + nodeoff >= 0; + nodeoff = fdt_next_subnode(pfdt, nodeoff)) { + + prop = fdt_getprop(pfdt, nodeoff, "iommus", &proplen); + if (!prop) + continue; + + prop_c = libxl__zalloc(gc, proplen); + + for (i = 0; i < proplen / 8; ++i) { + prop_c[i * 2] = cpu_to_fdt32(GUEST_PHANDLE_VSMMUV3); + prop_c[i * 2 + 1] = prop[i * 2 + 1]; + } + + r = fdt_setprop(pfdt, nodeoff, "iommus", prop_c, proplen); + if (r) { + LOG(ERROR, "Can't set the iommus property in partial FDT"); + return r; + } + } + + return 0; +} + #else static int check_partial_fdt(libxl__gc *gc, void *fdt, size_t size) @@ -1185,6 +1220,13 @@ static int copy_partial_fdt(libxl__gc *gc, void *fdt, void *pfdt) return -FDT_ERR_INTERNAL; } +static int modify_partial_fdt(libxl__gc *gc, void *pfdt) +{ + LOG(ERROR, "partial device tree not supported"); + + return ERROR_FAIL; +} + #endif /* ENABLE_PARTIAL_DEVICE_TREE */ #define FDT_MAX_SIZE (1<<20) @@ -1307,8 +1349,11 @@ next_resize: if (d_config->num_pcidevs) FDT( make_vpci_node(gc, fdt, ainfo, dom) ); - if (info->arch_arm.viommu_type == LIBXL_VIOMMU_TYPE_SMMUV3) + if (info->arch_arm.viommu_type == LIBXL_VIOMMU_TYPE_SMMUV3) { FDT( make_vsmmuv3_node(gc, fdt, ainfo, dom) ); + if (pfdt) + 
FDT( modify_partial_fdt(gc, pfdt) ); + } iommu_created = false; for (i = 0; i < d_config->num_disks; i++) { libxl_device_disk *disk = &d_config->disks[i]; From patchwork Thu Dec 1 16:02:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rahul Singh X-Patchwork-Id: 13061557 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 2F8D4C43217 for ; Thu, 1 Dec 2022 16:20:10 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.451043.708649 (Exim 4.92) (envelope-from ) id 1p0mI6-0005R8-9b; Thu, 01 Dec 2022 16:20:02 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 451043.708649; Thu, 01 Dec 2022 16:20:02 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mI6-0005QP-5h; Thu, 01 Dec 2022 16:20:02 +0000 Received: by outflank-mailman (input) for mailman id 451043; Thu, 01 Dec 2022 16:20:01 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1p0mI5-00056N-1n for xen-devel@lists.xenproject.org; Thu, 01 Dec 2022 16:20:01 +0000 Received: from foss.arm.com (foss.arm.com [217.140.110.172]) by se1-gles-sth1.inumbo.com (Halon) with ESMTP id 00b98bdb-7194-11ed-91b6-6bf2151ebd3b; Thu, 01 Dec 2022 17:20:00 +0100 (CET) Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 47ED4D6E; Thu, 1 Dec 2022 08:20:06 -0800 (PST) Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com [10.1.199.62]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D95343F73B; Thu, 1 Dec 2022 08:19:58 -0800 (PST) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 00b98bdb-7194-11ed-91b6-6bf2151ebd3b From: Rahul Singh To: xen-devel@lists.xenproject.org Cc: Stefano Stabellini , Julien Grall , Bertrand Marquis , Volodymyr Babchuk Subject: [RFC PATCH 21/21] xen/arm: vIOMMU: Modify the partial device tree for dom0less Date: Thu, 1 Dec 2022 16:02:45 +0000 Message-Id: <127da5a0d4300e083b8840a4f3a0d2d63bde5b6f.1669888522.git.rahul.singh@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 To configure the IOMMU in the guest for passthrough devices, the user needs to copy the unmodified "iommus" property from the host device tree into the partial device tree. To enable the guest Linux kernel to configure the IOMMU correctly, Xen replaces the phandle in the partial device tree with the virtual IOMMU phandle when the "iommus" property is set.
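The rewrite is mechanical: an "iommus" property is a list of <phandle StreamID> cell pairs, and only the phandle cell has to be redirected at the emulated SMMUv3 while the StreamID cell is kept as-is. Below is a minimal, hedged sketch of that transformation with libfdt; the helper name, the fixed-size buffer and the vsmmu_phandle parameter are illustrative assumptions, whereas the patch itself copies the property into a dynamically sized buffer and uses GUEST_PHANDLE_VSMMUV3.

/*
 * Sketch only: point every "iommus" entry of one partial-DT node at the
 * emulated SMMUv3, preserving the StreamID cell. Mirrors what
 * modify_pfdt_node() in this patch does, with simplified error handling.
 */
#include <stdint.h>
#include <libfdt.h>

static int redirect_iommus(void *pfdt, int nodeoff, uint32_t vsmmu_phandle)
{
    const fdt32_t *prop;
    fdt32_t cells[64];              /* assumes a small, bounded property */
    int proplen, i;

    prop = fdt_getprop(pfdt, nodeoff, "iommus", &proplen);
    if (!prop)
        return 0;                   /* node has no "iommus" property */
    if (proplen % 8 || proplen > (int)sizeof(cells))
        return -FDT_ERR_BADVALUE;   /* expect whole <phandle sid> pairs */

    for (i = 0; i < proplen / 8; i++) {
        cells[i * 2] = cpu_to_fdt32(vsmmu_phandle); /* new IOMMU phandle */
        cells[i * 2 + 1] = prop[i * 2 + 1];         /* original StreamID */
    }

    return fdt_setprop(pfdt, nodeoff, "iommus", cells, proplen);
}

Because the property length is unchanged, fdt_setprop() rewrites the value in place and the partial FDT does not have to grow.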
Signed-off-by: Rahul Singh --- xen/arch/arm/domain_build.c | 31 ++++++++++++++++++++++++++++++- 1 file changed, 30 insertions(+), 1 deletion(-) diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c index 7cd99a6771..afb3e76409 100644 --- a/xen/arch/arm/domain_build.c +++ b/xen/arch/arm/domain_build.c @@ -3235,7 +3235,35 @@ static int __init handle_prop_pfdt(struct kernel_info *kinfo, return ( propoff != -FDT_ERR_NOTFOUND ) ? propoff : 0; } -static int __init scan_pfdt_node(struct kernel_info *kinfo, const void *pfdt, +static void modify_pfdt_node(void *pfdt, int nodeoff) +{ + int proplen, i, rc; + const fdt32_t *prop; + fdt32_t *prop_c; + + prop = fdt_getprop(pfdt, nodeoff, "iommus", &proplen); + if ( !prop ) + return; + + prop_c = xzalloc_bytes(proplen); + + for ( i = 0; i < proplen / 8; ++i ) + { + prop_c[i * 2] = cpu_to_fdt32(GUEST_PHANDLE_VSMMUV3); + prop_c[i * 2 + 1] = prop[i * 2 + 1]; + } + + rc = fdt_setprop(pfdt, nodeoff, "iommus", prop_c, proplen); + if ( rc ) + { + dprintk(XENLOG_ERR, "Can't set the iommus property in partial FDT"); + return; + } + + return; +} + +static int __init scan_pfdt_node(struct kernel_info *kinfo, void *pfdt, int nodeoff, uint32_t address_cells, uint32_t size_cells, bool scan_passthrough_prop) @@ -3261,6 +3289,7 @@ static int __init scan_pfdt_node(struct kernel_info *kinfo, const void *pfdt, node_next = fdt_first_subnode(pfdt, nodeoff); while ( node_next > 0 ) { + modify_pfdt_node(pfdt, node_next); scan_pfdt_node(kinfo, pfdt, node_next, address_cells, size_cells, scan_passthrough_prop); node_next = fdt_next_subnode(pfdt, node_next);