
[v2] iommu/arm-smmu-v3: Add a user-configurable tlb_invalidate_threshold

Message ID 20230816204350.29150-1-nicolinc@nvidia.com (mailing list archive)
State New, archived

Commit Message

Nicolin Chen Aug. 16, 2023, 8:43 p.m. UTC
When __arm_smmu_tlb_inv_range() is called with a large size, the call can
take a long time: part of the latency comes from the software overhead of
building the commands, and part from the CMDQ hardware consuming the large
number of commands. The latency can be especially large on an SMMU that does
not support range invalidation commands, i.e. one without
ARM_SMMU_FEAT_RANGE_INV.
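For example, invalidating a 1GB range mapped with 4KB pages on such an SMMU
means issuing 262,144 CMD_TLBI_NH_VA commands, while a single
CMD_TLBI_NH_ASID command could cover it.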

One way to optimize this is to replace the large number of VA invalidation
commands with a single per-ASID invalidation command once the requested size
reaches a threshold. This threshold can be made configurable to suit the
SMMU implementation.

Add a tlb_invalidate_threshold per SMMU device, and allow it to be
configured via a sysfs node. Set its default value just above the highest
address the SMMU translates, so that by default no replacement occurs until
a user sets the threshold. Also, add an ABI doc for the new node using a
"%llu" format so the threshold can be 4GB or larger on a 32-bit kernel.

Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---

Changelog
v2:
 * Renamed the sysfs node and changed it to represent size in bytes
 * Set the upper/lower values of the threshold to be more related to SMMU
 * Revised the ABI description accordingly and added upper/lower values
 * Dropped the "arm-smmu-v3" subdirectory for the new sysfs node
 * Minor coding style and doc/comment updates
v1: https://lore.kernel.org/all/20230814215701.5455-1-nicolinc@nvidia.com/

 .../ABI/testing/sysfs-class-iommu-arm-smmu-v3 | 13 +++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 54 ++++++++++++++++++-
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  3 ++
 3 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/ABI/testing/sysfs-class-iommu-arm-smmu-v3

Comments

Nicolin Chen Aug. 17, 2023, 6:36 p.m. UTC | #1
On Wed, Aug 16, 2023 at 01:43:50PM -0700, Nicolin Chen wrote:
 
> When __arm_smmu_tlb_inv_range() is called with a large size, the call can
> take a long time: part of the latency comes from the software overhead of
> building the commands, and part from the CMDQ hardware consuming the large
> number of commands. The latency can be especially large on an SMMU that does
> not support range invalidation commands, i.e. one without
> ARM_SMMU_FEAT_RANGE_INV.
> 
> One way to optimize this is to replace the large number of VA invalidation
> commands with a single per-ASID invalidation command once the requested size
> reaches a threshold. This threshold can be made configurable to suit the
> SMMU implementation.

I'm rethinking this size-based threshold, since what really affects the
latency is the number of invalidation commands in the request. So an
npages-based threshold might be a better fit, though the idea and
implementation would be similar.
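
E.g. something along these lines (untested, and "npages_threshold" is just
a placeholder name for a field that doesn't exist yet):

	/* Number of per-granule TLBI commands this range would expand to */
	unsigned long npages = size >> __ffs(smmu_domain->domain.pgsize_bitmap);

	if (cmd->tlbi.asid && npages >= smmu->npages_threshold)
		return arm_smmu_tlb_inv_asid(smmu, cmd->tlbi.asid);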

Thanks
Nicolin
Will Deacon Aug. 18, 2023, 4:11 p.m. UTC | #2
On Thu, Aug 17, 2023 at 11:36:18AM -0700, Nicolin Chen wrote:
> On Wed, Aug 16, 2023 at 01:43:50PM -0700, Nicolin Chen wrote:
>  
> > When __arm_smmu_tlb_inv_range() is called with a large size, the call can
> > take a long time: part of the latency comes from the software overhead of
> > building the commands, and part from the CMDQ hardware consuming the large
> > number of commands. The latency can be especially large on an SMMU that does
> > not support range invalidation commands, i.e. one without
> > ARM_SMMU_FEAT_RANGE_INV.
> > 
> > One way to optimize this is to replace the large number of VA invalidation
> > commands with a single per-ASID invalidation command once the requested size
> > reaches a threshold. This threshold can be made configurable to suit the
> > SMMU implementation.
> 
> I'm rethinking this size-based threshold, since what really affects the
> latency is the number of invalidation commands in the request. So an
> npages-based threshold might be a better fit, though the idea and
> implementation would be similar.

On the CPU side, we just have:

#define MAX_TLBI_OPS    PTRS_PER_PTE

in asm/tlbflush.h
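
That's used by __flush_tlb_range() to fall back to a full MM flush when a
range would take too many per-page operations -- roughly (paraphrased, not
the exact code):

	if (!system_supports_tlb_range() &&
	    (end - start) >= (MAX_TLBI_OPS * stride)) {
		flush_tlb_mm(vma->vm_mm);
		return;
	}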

Can we start off with something similar for the SMMU? I'm not massively
keen on exposing this as a knob to userspace, because I don't think most
people will have a clue about how to tune it.

Will
Nicolin Chen Aug. 18, 2023, 5:18 p.m. UTC | #3
On Fri, Aug 18, 2023 at 05:11:19PM +0100, Will Deacon wrote:
 
> On Thu, Aug 17, 2023 at 11:36:18AM -0700, Nicolin Chen wrote:
> > On Wed, Aug 16, 2023 at 01:43:50PM -0700, Nicolin Chen wrote:
> >
> > > When __arm_smmu_tlb_inv_range() is called with a large size, the call can
> > > take a long time: part of the latency comes from the software overhead of
> > > building the commands, and part from the CMDQ hardware consuming the large
> > > number of commands. The latency can be especially large on an SMMU that does
> > > not support range invalidation commands, i.e. one without
> > > ARM_SMMU_FEAT_RANGE_INV.
> > >
> > > One way to optimize this is to replace the large number of VA invalidation
> > > commands with a single per-ASID invalidation command once the requested size
> > > reaches a threshold. This threshold can be made configurable to suit the
> > > SMMU implementation.
> >
> > I'm rethinking this size-based threshold, since what really affects the
> > latency is the number of invalidation commands in the request. So an
> > npages-based threshold might be a better fit, though the idea and
> > implementation would be similar.
> 
> On the CPU side, we just have:
> 
> #define MAX_TLBI_OPS    PTRS_PER_PTE
> 
> in asm/tlbflush.h
> 
> Can we start off with something similar for the SMMU? I'm not massively
> keen on exposing this as a knob to userspace, because I don't think most
> people will have a clue about how to tune it.

Yes! I was hesitant about proposing an arbitrary threshold, though that
actually fits our situation better. Now it makes sense. Thanks for the input!

What I would do is pick a number based on our test results. Yet, it seems
that 1024 was chosen at the beginning to fix a soft-lockup bug, and it was
later changed to PTRS_PER_PTE for better performance, IIUIC. Should we use a
similar setup for the SMMU? I found that value is stored in
data->bits_per_level, so perhaps it could be passed back to the driver via
"struct io_pgtable_cfg".

Nicolin

Patch

diff --git a/Documentation/ABI/testing/sysfs-class-iommu-arm-smmu-v3 b/Documentation/ABI/testing/sysfs-class-iommu-arm-smmu-v3
new file mode 100644
index 000000000000..2509bc90487c
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-iommu-arm-smmu-v3
@@ -0,0 +1,13 @@ 
+What:		/sys/class/iommu/<iommu>/tlb_invalidate_threshold
+Date:		August 2023
+KernelVersion:	6.6
+Contact:	Nicolin Chen <nicolinc@nvidia.com>
+Description:
+		Size threshold in bytes for the SMMU to replace VA TLB
+		invalidation commands with a single per-ASID TLB invalidation
+		command. Its default value is set just above the highest
+		address that the SMMU translates, so no TLB invalidation
+		commands are replaced by default. That is also the maximum
+		value the threshold can be set to. Its minimum value is the
+		smallest page size the SMMU supports.
+		Format: %llu
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 9b0dc3505601..96cc2f4d8a2e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1891,6 +1891,14 @@  static void __arm_smmu_tlb_inv_range(struct arm_smmu_cmdq_ent *cmd,
 	if (!size)
 		return;
 
+	/*
+	 * Convert a large number of VA invalidation commands into a single
+	 * per-ASID invalidation when the requested size reaches the threshold.
+	 * This simplifies command issuing, especially w/o ARM_SMMU_FEAT_RANGE_INV.
+	 */
+	if (cmd->tlbi.asid && size >= smmu->tlb_invalidate_threshold)
+		return arm_smmu_tlb_inv_asid(smmu, cmd->tlbi.asid);
+
 	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
 		/* Get the leaf page size */
 		tg = __ffs(smmu_domain->domain.pgsize_bitmap);
@@ -3683,6 +3691,7 @@  static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 			 "failed to set DMA mask for table walker\n");
 
 	smmu->ias = max(smmu->ias, smmu->oas);
+	smmu->tlb_invalidate_threshold = 1UL << smmu->ias;
 
 	if ((smmu->features & ARM_SMMU_FEAT_TRANS_S1) &&
 	    (smmu->features & ARM_SMMU_FEAT_TRANS_S2))
@@ -3808,6 +3817,49 @@  static void arm_smmu_rmr_install_bypass_ste(struct arm_smmu_device *smmu)
 	iort_put_rmr_sids(dev_fwnode(smmu->dev), &rmr_list);
 }
 
+static ssize_t tlb_invalidate_threshold_show(struct device *dev,
+					     struct device_attribute *attr,
+					     char *buf)
+{
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev->parent);
+
+	return sysfs_emit(buf, "%llu\n", smmu->tlb_invalidate_threshold);
+}
+
+static ssize_t tlb_invalidate_threshold_store(struct device *dev,
+					      struct device_attribute *attr,
+					      const char *buf, size_t size)
+{
+	struct arm_smmu_device *smmu = dev_get_drvdata(dev->parent);
+	unsigned long long tlb_invalidate_threshold;
+	int ret;
+
+	ret = kstrtoull(buf, 0, &tlb_invalidate_threshold);
+	if (ret)
+		return ret;
+	if (tlb_invalidate_threshold > 1UL << smmu->ias ||
+	    tlb_invalidate_threshold < 1 << __ffs(smmu->pgsize_bitmap))
+		return -EINVAL;
+	smmu->tlb_invalidate_threshold = tlb_invalidate_threshold;
+	return size;
+}
+
+static DEVICE_ATTR_RW(tlb_invalidate_threshold);
+
+static struct attribute *arm_smmu_attrs[] = {
+	&dev_attr_tlb_invalidate_threshold.attr,
+	NULL,
+};
+
+static struct attribute_group arm_smmu_group = {
+	.attrs = arm_smmu_attrs,
+};
+
+static const struct attribute_group *arm_smmu_groups[] = {
+	&arm_smmu_group,
+	NULL,
+};
+
 static int arm_smmu_device_probe(struct platform_device *pdev)
 {
 	int irq, ret;
@@ -3900,7 +3952,7 @@  static int arm_smmu_device_probe(struct platform_device *pdev)
 		return ret;
 
 	/* And we're up. Go go go! */
-	ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+	ret = iommu_device_sysfs_add(&smmu->iommu, dev, arm_smmu_groups,
 				     "smmu3.%pa", &ioaddr);
 	if (ret)
 		return ret;
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index dcab85698a4e..9af8d543aab0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -665,6 +665,9 @@  struct arm_smmu_device {
 	unsigned long			oas; /* PA */
 	unsigned long			pgsize_bitmap;
 
+	/* Size threshold in bytes to convert VA range TLBI to asid TLBI */
+	unsigned long long		tlb_invalidate_threshold;
+
 #define ARM_SMMU_MAX_ASIDS		(1 << 16)
 	unsigned int			asid_bits;