| Message ID | 20210518064215.2856977-15-tientzu@chromium.org |
| --- | --- |
| State | New, archived |
| Series | Restricted DMA |
Hi Claire,

On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> Introduce the new compatible string, restricted-dma-pool, for restricted
> DMA. One can specify the address and length of the restricted DMA memory
> region by restricted-dma-pool in the reserved-memory node.
>
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  .../reserved-memory/reserved-memory.txt       | 27 +++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> index e8d3096d922c..284aea659015 100644
> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> @@ -51,6 +51,23 @@ compatible (optional) - standard definition
>            used as a shared pool of DMA buffers for a set of devices. It can
>            be used by an operating system to instantiate the necessary pool
>            management subsystem if necessary.
> +        - restricted-dma-pool: This indicates a region of memory meant to be
> +          used as a pool of restricted DMA buffers for a set of devices. The
> +          memory region would be the only region accessible to those devices.
> +          When using this, the no-map and reusable properties must not be set,
> +          so the operating system can create a virtual mapping that will be used
> +          for synchronization. The main purpose for restricted DMA is to
> +          mitigate the lack of DMA access control on systems without an IOMMU,
> +          which could result in the DMA accessing the system memory at
> +          unexpected times and/or unexpected addresses, possibly leading to data
> +          leakage or corruption. The feature on its own provides a basic level
> +          of protection against the DMA overwriting buffer contents at
> +          unexpected times. However, to protect against general data leakage and
> +          system memory corruption, the system needs to provide way to lock down
> +          the memory access, e.g., MPU. Note that since coherent allocation
> +          needs remapping, one must set up another device coherent pool by
> +          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
> +          coherent allocation.
>          - vendor specific string in the form <vendor>,[<device>-]<usage>
>  no-map (optional) - empty property
>      - Indicates the operating system must not create a virtual mapping
> @@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  		compatible = "acme,multimedia-memory";
>  		reg = <0x77000000 0x4000000>;
>  	};
> +
> +	restricted_dma_mem_reserved: restricted_dma_mem_reserved {
> +		compatible = "restricted-dma-pool";
> +		reg = <0x50000000 0x400000>;
> +	};

nit: You need to update the old text that states "This example defines 3
contiguous regions ...".

>  };
>
>  /* ... */
> @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  		memory-region = <&multimedia_reserved>;
>  		/* ... */
>  	};
> +
> +	pcie_device: pcie_device@0,0 {
> +		memory-region = <&restricted_dma_mem_reserved>;
> +		/* ... */
> +	};

I still don't understand how this works for individual PCIe devices -- how
is dev->of_node set to point at the node you have above?

I tried adding the memory-region to the host controller instead, and then
I see it crop up in dmesg:

  | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved

but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
so the restricted DMA area is not used. In fact, swiotlb isn't used at all.

What am I missing to make this work with PCIe devices?

Thanks,

Will
On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> >  		memory-region = <&multimedia_reserved>;
> >  		/* ... */
> >  	};
> > +
> > +	pcie_device: pcie_device@0,0 {
> > +		memory-region = <&restricted_dma_mem_reserved>;
> > +		/* ... */
> > +	};
>
> I still don't understand how this works for individual PCIe devices -- how
> is dev->of_node set to point at the node you have above?
>
> I tried adding the memory-region to the host controller instead, and then
> I see it crop up in dmesg:
>
>   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
>
> but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
>
> What am I missing to make this work with PCIe devices?

Aha, looks like we're just missing the logic to inherit the DMA
configuration. The diff below gets things working for me.

Will

--->8

diff --git a/drivers/of/address.c b/drivers/of/address.c
index c562a9ff5f0b..bf499fdd6e93 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -1113,25 +1113,25 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
-int of_dma_set_restricted_buffer(struct device *dev)
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
 {
-	struct device_node *node;
 	int count, i;
 
-	if (!dev->of_node)
+	if (!np)
 		return 0;
 
-	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+	count = of_property_count_elems_of_size(np, "memory-region",
 						sizeof(phandle));
 	for (i = 0; i < count; i++) {
-		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		struct device_node *node;
+
+		node = of_parse_phandle(np, "memory-region", i);
 		/* There might be multiple memory regions, but only one
-		 * restriced-dma-pool region is allowed.
+		 * restricted-dma-pool region is allowed.
 		 */
 		if (of_device_is_compatible(node, "restricted-dma-pool") &&
 		    of_device_is_available(node))
-			return of_reserved_mem_device_init_by_idx(
-				dev, dev->of_node, i);
+			return of_reserved_mem_device_init_by_idx(dev, np, i);
 	}
 
 	return 0;
diff --git a/drivers/of/device.c b/drivers/of/device.c
index d8d865223e51..2defdca418ec 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -166,7 +166,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
 	if (!iommu)
-		return of_dma_set_restricted_buffer(dev);
+		return of_dma_set_restricted_buffer(dev, np);
 
 	return 0;
 }
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 9fc874548528..8fde97565d11 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,14 +163,15 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
-int of_dma_set_restricted_buffer(struct device *dev);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
-static inline int of_dma_set_restricted_buffer(struct device *dev)
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
 {
 	return -ENODEV;
 }
On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
>
> On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > >  		memory-region = <&multimedia_reserved>;
> > >  		/* ... */
> > >  	};
> > > +
> > > +	pcie_device: pcie_device@0,0 {
> > > +		memory-region = <&restricted_dma_mem_reserved>;
> > > +		/* ... */
> > > +	};
> >
> > I still don't understand how this works for individual PCIe devices -- how
> > is dev->of_node set to point at the node you have above?
> >
> > I tried adding the memory-region to the host controller instead, and then
> > I see it crop up in dmesg:
> >
> >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> >
> > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> >
> > What am I missing to make this work with PCIe devices?
>
> Aha, looks like we're just missing the logic to inherit the DMA
> configuration. The diff below gets things working for me.

I guess what was missing is the reg property in the pcie_device node.
Will update the example dts.
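For illustration, the updated example node could look roughly like the below.
The all-zero reg value (standard PCI encoding of bus 0, device 0, function 0
with three address cells and two size cells) is only a placeholder, not
something taken from the series:

	pcie_device: pcie_device@0,0 {
		/* Placeholder reg: config space of bus 0, device 0, function 0 */
		reg = <0x0 0x0 0x0 0x0 0x0>;
		memory-region = <&restricted_dma_mem_reserved>;
		/* ... */
	};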
On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > >  		memory-region = <&multimedia_reserved>;
> > > >  		/* ... */
> > > >  	};
> > > > +
> > > > +	pcie_device: pcie_device@0,0 {
> > > > +		memory-region = <&restricted_dma_mem_reserved>;
> > > > +		/* ... */
> > > > +	};
> > >
> > > I still don't understand how this works for individual PCIe devices -- how
> > > is dev->of_node set to point at the node you have above?
> > >
> > > I tried adding the memory-region to the host controller instead, and then
> > > I see it crop up in dmesg:
> > >
> > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > >
> > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > >
> > > What am I missing to make this work with PCIe devices?
> >
> > Aha, looks like we're just missing the logic to inherit the DMA
> > configuration. The diff below gets things working for me.
>
> I guess what was missing is the reg property in the pcie_device node.
> Will update the example dts.

Thanks. I still think something like my diff makes sense, if you wouldn't
mind including it, as it allows restricted DMA to be used for situations
where the PCIe topology is not static.

Perhaps we should prefer dev->of_node if it exists, but then use the node
of the host bridge's parent node otherwise?

Will
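A minimal sketch of the fallback Will describes, assuming it would sit in the
of_dma_configure_id() path touched by the diff above; the helper name is made
up for illustration and is not part of the series:

#include <linux/device.h>
#include <linux/of.h>

/*
 * Hypothetical helper: choose the node to scan for a "restricted-dma-pool"
 * memory-region. Prefer the device's own DT node when the PCIe topology is
 * static and described in the device tree; otherwise fall back to the node
 * the bus code passed in (for PCI, the host bridge's parent node).
 */
static struct device_node *restricted_dma_node(struct device *dev,
					       struct device_node *np)
{
	return dev->of_node ? dev->of_node : np;
}

That way a per-device node keeps working for static topologies, while
hot-plugged functions can still inherit the pool from the host bridge.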
On Thu, May 27, 2021 at 7:35 PM Will Deacon <will@kernel.org> wrote:
>
> On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> > On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> > >
> > > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > > >  		memory-region = <&multimedia_reserved>;
> > > > >  		/* ... */
> > > > >  	};
> > > > > +
> > > > > +	pcie_device: pcie_device@0,0 {
> > > > > +		memory-region = <&restricted_dma_mem_reserved>;
> > > > > +		/* ... */
> > > > > +	};
> > > >
> > > > I still don't understand how this works for individual PCIe devices -- how
> > > > is dev->of_node set to point at the node you have above?
> > > >
> > > > I tried adding the memory-region to the host controller instead, and then
> > > > I see it crop up in dmesg:
> > > >
> > > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > > >
> > > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > > >
> > > > What am I missing to make this work with PCIe devices?
> > >
> > > Aha, looks like we're just missing the logic to inherit the DMA
> > > configuration. The diff below gets things working for me.
> >
> > I guess what was missing is the reg property in the pcie_device node.
> > Will update the example dts.
>
> Thanks. I still think something like my diff makes sense, if you wouldn't
> mind including it, as it allows restricted DMA to be used for situations
> where the PCIe topology is not static.
>
> Perhaps we should prefer dev->of_node if it exists, but then use the node
> of the host bridge's parent node otherwise?

Sure. Let me add in the next version.

>
> Will
On Thu, May 27, 2021 at 08:48:59PM +0800, Claire Chang wrote:
> On Thu, May 27, 2021 at 7:35 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> > > On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> > > >
> > > > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > > > >  		memory-region = <&multimedia_reserved>;
> > > > > >  		/* ... */
> > > > > >  	};
> > > > > > +
> > > > > > +	pcie_device: pcie_device@0,0 {
> > > > > > +		memory-region = <&restricted_dma_mem_reserved>;
> > > > > > +		/* ... */
> > > > > > +	};
> > > > >
> > > > > I still don't understand how this works for individual PCIe devices -- how
> > > > > is dev->of_node set to point at the node you have above?
> > > > >
> > > > > I tried adding the memory-region to the host controller instead, and then
> > > > > I see it crop up in dmesg:
> > > > >
> > > > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > > > >
> > > > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > > > >
> > > > > What am I missing to make this work with PCIe devices?
> > > >
> > > > Aha, looks like we're just missing the logic to inherit the DMA
> > > > configuration. The diff below gets things working for me.
> > >
> > > I guess what was missing is the reg property in the pcie_device node.
> > > Will update the example dts.
> >
> > Thanks. I still think something like my diff makes sense, if you wouldn't
> > mind including it, as it allows restricted DMA to be used for situations
> > where the PCIe topology is not static.
> >
> > Perhaps we should prefer dev->of_node if it exists, but then use the node
> > of the host bridge's parent node otherwise?
>
> Sure. Let me add in the next version.

Brill, thanks! I'll take it for a spin once it lands on the list.

Will
diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..284aea659015 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		compatible = "acme,multimedia-memory";
 		reg = <0x77000000 0x4000000>;
 	};
+
+	restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+		compatible = "restricted-dma-pool";
+		reg = <0x50000000 0x400000>;
+	};
 };
 
 /* ... */
@@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
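To illustrate the note in the binding text about atomic coherent allocations,
a device could be pointed at both a restricted pool and a per-device coherent
pool. This is only a sketch: the node names, addresses and sizes are made up,
and whether the coherent pool is actually attached depends on the driver
initialising that region (e.g. via of_reserved_mem_device_init_by_idx()):

	reserved-memory {
		/* ... */

		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
			compatible = "restricted-dma-pool";
			reg = <0x50000000 0x400000>;
		};

		coherent_dma_mem_reserved: coherent_dma_mem_reserved {
			compatible = "shared-dma-pool";
			reg = <0x50400000 0x100000>;
		};
	};

	pcie_device: pcie_device@0,0 {
		memory-region = <&restricted_dma_mem_reserved>,
				<&coherent_dma_mem_reserved>;
		/* ... */
	};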
Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region by restricted-dma-pool in the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)