| Message ID | 20250227144150.1667735-4-suzuki.poulose@arm.com |
|---|---|
| State | New, archived |
| Series | arm64: realm: Fix DMA address for devices |
On Thu, Feb 27, 2025 at 02:41:50PM +0000, Suzuki K Poulose wrote:
> When a device performs DMA to a shared buffer using physical addresses,
> (without Stage1 translation), the device must use the "{I}PA address" with the
> top bit set in Realm. This is to make sure that a trusted device will be able
> to write to shared buffers as well as the protected buffers. Thus, a Realm must
> always program the full address including the "protection" bit, like AMD SME
> encryption bits.
>
> Enable this by providing arm64 specific dma_addr_{encrypted, canonical}
> helpers for Realms. Please note that the VMM needs to similarly make sure that
> the SMMU Stage2 in the Non-secure world is setup accordingly to map IPA at the
> unprotected alias.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>

In case this goes in via the DMA API tree:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

(we could bikeshed on the names like unencrypted vs decrypted, but I'm
not fussed about it)
On 27/02/2025 16:05, Catalin Marinas wrote:
> On Thu, Feb 27, 2025 at 02:41:50PM +0000, Suzuki K Poulose wrote:
>> When a device performs DMA to a shared buffer using physical addresses,
>> (without Stage1 translation), the device must use the "{I}PA address" with the
>> top bit set in Realm. This is to make sure that a trusted device will be able
>> to write to shared buffers as well as the protected buffers. Thus, a Realm must
>> always program the full address including the "protection" bit, like AMD SME
>> encryption bits.
>>
>> Enable this by providing arm64 specific dma_addr_{encrypted, canonical}
>> helpers for Realms. Please note that the VMM needs to similarly make sure that
>> the SMMU Stage2 in the Non-secure world is setup accordingly to map IPA at the
>> unprotected alias.
>>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Robin Murphy <robin.murphy@arm.com>
>> Cc: Steven Price <steven.price@arm.com>
>> Cc: Christoph Hellwig <hch@lst.de>
>> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
>> Cc: Tom Lendacky <thomas.lendacky@amd.com>
>> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>
> In case this goes in via the DMA API tree:
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks Catalin.

>
> (we could bikeshed on the names like unencrypted vs decrypted but I'm
> not fussed about)

It was initially "decrypted", but Robin suggested that the DMA layer already
uses "encrypted" and "unencrypted" (e.g., force_dma_unencrypted(),
phys_to_dma_unencrypted(), etc.).

Cheers
Suzuki
On 2/28/25 12:41 AM, Suzuki K Poulose wrote:
> When a device performs DMA to a shared buffer using physical addresses,
> (without Stage1 translation), the device must use the "{I}PA address" with the
> top bit set in Realm. This is to make sure that a trusted device will be able
> to write to shared buffers as well as the protected buffers. Thus, a Realm must
> always program the full address including the "protection" bit, like AMD SME
> encryption bits.
>
> Enable this by providing arm64 specific dma_addr_{encrypted, canonical}
> helpers for Realms. Please note that the VMM needs to similarly make sure that
> the SMMU Stage2 in the Non-secure world is setup accordingly to map IPA at the
> unprotected alias.
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since v2:
> - Drop dma_addr_encrypted() helper, which is a NOP for CCA ( Aneesh )
> - Only mask the "top" IPA bit and not all the bits beyond top bit. ( Robin )
> - Use PROT_NS_SHARED, now that we only set/clear top bit. (Gavin)
> ---
> arch/arm64/include/asm/mem_encrypt.h | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>

Reviewed-by: Gavin Shan <gshan@redhat.com>
On 27/02/2025 2:41 pm, Suzuki K Poulose wrote:
> When a device performs DMA to a shared buffer using physical addresses,
> (without Stage1 translation), the device must use the "{I}PA address" with the
> top bit set in Realm. This is to make sure that a trusted device will be able
> to write to shared buffers as well as the protected buffers. Thus, a Realm must
> always program the full address including the "protection" bit, like AMD SME
> encryption bits.
>
> Enable this by providing arm64 specific dma_addr_{encrypted, canonical}
> helpers for Realms. Please note that the VMM needs to similarly make sure that
> the SMMU Stage2 in the Non-secure world is setup accordingly to map IPA at the
> unprotected alias.

FWIW,

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Cc: Will Deacon <will@kernel.org>
> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Changes since v2:
> - Drop dma_addr_encrypted() helper, which is a NOP for CCA ( Aneesh )
> - Only mask the "top" IPA bit and not all the bits beyond top bit. ( Robin )
> - Use PROT_NS_SHARED, now that we only set/clear top bit. (Gavin)
> ---
> arch/arm64/include/asm/mem_encrypt.h | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
> index f8f78f622dd2..a2a1eeb36d4b 100644
> --- a/arch/arm64/include/asm/mem_encrypt.h
> +++ b/arch/arm64/include/asm/mem_encrypt.h
> @@ -21,4 +21,15 @@ static inline bool force_dma_unencrypted(struct device *dev)
>  	return is_realm_world();
>  }
>
> +/*
> + * For Arm CCA guests, canonical addresses are "encrypted", so no changes
> + * required for dma_addr_encrypted().
> + * The unencrypted DMA buffers must be accessed via the unprotected IPA,
> + * "top IPA bit" set.
> + */
> +#define dma_addr_unencrypted(x)	((x) | PROT_NS_SHARED)
> +
> +/* Clear the "top" IPA bit while converting back */
> +#define dma_addr_canonical(x)	((x) & ~PROT_NS_SHARED)
> +
>  #endif /* __ASM_MEM_ENCRYPT_H */
diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index f8f78f622dd2..a2a1eeb36d4b 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -21,4 +21,15 @@ static inline bool force_dma_unencrypted(struct device *dev)
 	return is_realm_world();
 }
 
+/*
+ * For Arm CCA guests, canonical addresses are "encrypted", so no changes
+ * are required for dma_addr_encrypted().
+ * Unencrypted DMA buffers must be accessed via the unprotected IPA,
+ * i.e., with the "top IPA bit" set.
+ */
+#define dma_addr_unencrypted(x)	((x) | PROT_NS_SHARED)
+
+/* Clear the "top" IPA bit while converting back */
+#define dma_addr_canonical(x)	((x) & ~PROT_NS_SHARED)
+
 #endif /* __ASM_MEM_ENCRYPT_H */
When a device performs DMA to a shared buffer using physical addresses
(without Stage1 translation), the device must use the "{I}PA address" with
the top bit set in a Realm. This ensures that a trusted device is able to
write to shared buffers as well as to protected buffers. Thus, a Realm must
always program the full address, including the "protection" bit, much like
the AMD SME encryption bit.

Enable this by providing arm64-specific dma_addr_{unencrypted, canonical}
helpers for Realms. Please note that the VMM similarly needs to make sure
that the SMMU Stage2 in the Non-secure world is set up accordingly to map
the IPA at the unprotected alias.

Cc: Will Deacon <will@kernel.org>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Changes since v2:
- Drop the dma_addr_encrypted() helper, which is a NOP for CCA (Aneesh)
- Only mask the "top" IPA bit, not all the bits beyond it (Robin)
- Use PROT_NS_SHARED, now that we only set/clear the top bit (Gavin)
---
 arch/arm64/include/asm/mem_encrypt.h | 11 +++++++++++
 1 file changed, 11 insertions(+)