diff mbox series

[v2,3/3] arm64: realm: Use aliased addresses for device DMA to shared buffers

Message ID 20250219220751.1276854-4-suzuki.poulose@arm.com (mailing list archive)
State New
Headers show
Series arm64: realm: Fix DMA address for devices | expand

Commit Message

Suzuki K Poulose Feb. 19, 2025, 10:07 p.m. UTC
When a device performs DMA to a shared buffer using physical addresses
(without Stage 1 translation), the device must use the "{I}PA address" with
the top bit set in a Realm. This makes sure that a trusted device is able to
write to shared buffers as well as to the protected buffers. Thus, a Realm must
always program the full address, including the "protection" bit, much like the
AMD SME encryption bit.

Enable this by providing arm64-specific dma_{encrypted,decrypted,clear_encryption}
helpers for Realms. Please note that the VMM must similarly make sure that the
SMMU Stage 2 in the Non-secure world is set up to map the IPA at the
unprotected alias.

Cc: Will Deacon <will@kernel.org>
Cc: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/include/asm/mem_encrypt.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

Comments

Gavin Shan Feb. 25, 2025, 5:24 a.m. UTC | #1
Hi Suzuki,

On 2/20/25 8:07 AM, Suzuki K Poulose wrote:
> When a device performs DMA to a shared buffer using physical addresses,
> (without Stage1 translation), the device must use the "{I}PA address" with the
> top bit set in Realm. This is to make sure that a trusted device will be able
> to write to shared buffers as well as the protected buffers. Thus, a Realm must
> always program the full address including the "protection" bit, like AMD SME
> encryption bits.
> 
> Enable this by providing arm64 specific dma_{encrypted,decrypted,clear_encryption}
> helpers for Realms. Please note that the VMM needs to similarly make sure that
> the SMMU Stage2 in the Non-secure world is setup accordingly to map IPA at the
> unprotected alias.
> 
> [...]
> 
> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
> index f8f78f622dd2..aeda3bba255e 100644
> --- a/arch/arm64/include/asm/mem_encrypt.h
> +++ b/arch/arm64/include/asm/mem_encrypt.h
> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>   	return is_realm_world();
>   }
>   
> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
> +{
> +	if (is_realm_world())
> +		daddr |= prot_ns_shared;
> +	return daddr;
> +}
> +#define dma_decrypted dma_decrypted
> +

There is an existing macro (PROT_NS_SHARED), which is the preferred way to
return either prot_ns_shared or 0 depending on whether the realm capability
is present. However, that macro needs to be improved a bit before it can be
used here: it should return 0UL to match the type of prot_ns_shared
(unsigned long).

-#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0)
+#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0UL)

After that, the chunk of code can become:

	return daddr | PROT_NS_SHARED;

> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
> +{
> +	if (is_realm_world())
> +		daddr &= prot_ns_shared - 1;
> +	return daddr;
> +}
> +#define dma_encrypted dma_encrypted
> +

With PROT_NS_SHARED, it can become something like below. (PROT_NS_SHARED - 1)
is equivalent to -1UL, and 'daddr & -1UL' is fine since it does nothing.

	return daddr & (PROT_NS_SHARED - 1);

> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
> +{
> +	return dma_encrypted(daddr);
> +}
> +#define dma_clear_encryption dma_clear_encryption
> +
>   #endif	/* __ASM_MEM_ENCRYPT_H */

Thanks,
Gavin
Gavin Shan Feb. 25, 2025, 5:28 a.m. UTC | #2
On 2/25/25 3:24 PM, Gavin Shan wrote: 
> On 2/20/25 8:07 AM, Suzuki K Poulose wrote:
>> [...]
>>
>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>> index f8f78f622dd2..aeda3bba255e 100644
>> --- a/arch/arm64/include/asm/mem_encrypt.h
>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>       return is_realm_world();
>>   }
>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr |= prot_ns_shared;
>> +    return daddr;
>> +}
>> +#define dma_decrypted dma_decrypted
>> +
> 
> There is an existing macro (PROT_NS_SHARED), which is preferred to return
> prot_ns_shared or 0 depending on the availability of the realm capability.
> However, that macro needs to be improved a bit so that it can be used here.
> We need to return 0UL to match with the type of prot_ns_shared (unsigned long)
> 
> -#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0)
> +#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0UL)
> 
> After that, the chunk of code can be as below.
> 
>      return daddr | PROT_NS_SHARED;
> 
>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr &= prot_ns_shared - 1;
>> +    return daddr;
>> +}
>> +#define dma_encrypted dma_encrypted
>> +
> 
> With PROT_NS_SHARED, it can become something like below. (PROT_NS_SHARED - 1)
> is equivalent to -1UL, 'daddr & -1UL' should be fine since it does nothing.
> 

I meant (PROT_NS_SHARED - 1) is equivalent to -1UL when no realm capability
is around :)

>      return daddr & (PROT_NS_SHARED - 1);
> 
>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>> +{
>> +    return dma_encrypted(daddr);
>> +}
>> +#define dma_clear_encryption dma_clear_encryption
>> +
>>   #endif    /* __ASM_MEM_ENCRYPT_H */

Thanks,
Gavin
Robin Murphy Feb. 25, 2025, 1:04 p.m. UTC | #3
On 2025-02-19 10:07 pm, Suzuki K Poulose wrote:
> [...]
> 
> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
> index f8f78f622dd2..aeda3bba255e 100644
> --- a/arch/arm64/include/asm/mem_encrypt.h
> +++ b/arch/arm64/include/asm/mem_encrypt.h
> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>   	return is_realm_world();
>   }
>   
> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
> +{
> +	if (is_realm_world())
> +		daddr |= prot_ns_shared;
> +	return daddr;
> +}
> +#define dma_decrypted dma_decrypted
> +
> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
> +{
> +	if (is_realm_world())
> +		daddr &= prot_ns_shared - 1;

Nit: is there a reason this isn't the direct inverse of the other 
operation, i.e. "daddr &= ~prot_ns_shared"? If so, it might be worth 
dropping a comment why we're doing slightly unintuitive arithmetic on a 
pagetable attribute (and if not then maybe just do the more obvious 
thing). I doubt anyone's in a rush to support TBI for DMA, and this 
would be far from the only potential hiccup for that, but still... :)

Thanks,
Robin.

> +	return daddr;
> +}
> +#define dma_encrypted dma_encrypted
> +
> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
> +{
> +	return dma_encrypted(daddr);
> +}
> +#define dma_clear_encryption dma_clear_encryption
> +
>   #endif	/* __ASM_MEM_ENCRYPT_H */
Suzuki K Poulose Feb. 25, 2025, 4:14 p.m. UTC | #4
On 25/02/2025 13:04, Robin Murphy wrote:
> On 2025-02-19 10:07 pm, Suzuki K Poulose wrote:
>> [...]
>>
>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>> index f8f78f622dd2..aeda3bba255e 100644
>> --- a/arch/arm64/include/asm/mem_encrypt.h
>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>       return is_realm_world();
>>   }
>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr |= prot_ns_shared;
>> +    return daddr;
>> +}
>> +#define dma_decrypted dma_decrypted
>> +
>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr &= prot_ns_shared - 1;
> 
> Nit: is there a reason this isn't the direct inverse of the other 
> operation, i.e. "daddr &= ~prot_ns_shared"? If so, it might be worth 

It could be. The IPA space for the realm is split in half, with the
lower half protected/encrypted and anything above that unprotected.
Technically any addr >= prot_ns_shared is "unencrypted" (even though it
may be invalid, if >= BIT(IPA_Size) - 1), so to cover that, I masked
everything above the MSB. But now that I think of it, it is much better
to trigger a Stage 2 fault if the address is illegal (i.e., > BIT(IPA_Size)
- 1) than to corrupt some valid memory by masking the top bits (beyond
prot_ns_shared).

Cheers
Suzuki




> dropping a comment why we're doing slightly unintuitive arithmetic on a 
> pagetable attribute (and if not then maybe just do the more obvious 
> thing). I doubt anyone's in a rush to support TBI for DMA, and this 
> would be far from the only potential hiccup for that, but still... :)
> 
> Thanks,
> Robin.
> 
>> +    return daddr;
>> +}
>> +#define dma_encrypted dma_encrypted
>> +
>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>> +{
>> +    return dma_encrypted(daddr);
>> +}
>> +#define dma_clear_encryption dma_clear_encryption
>> +
>>   #endif    /* __ASM_MEM_ENCRYPT_H */
>
Suzuki K Poulose Feb. 25, 2025, 4:31 p.m. UTC | #5
Hi Gavin

Thanks for the review.

On 25/02/2025 05:28, Gavin Shan wrote:
> On 2/25/25 3:24 PM, Gavin Shan wrote:
>> On 2/20/25 8:07 AM, Suzuki K Poulose wrote:
>>> [...]
>>>
>>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>>> index f8f78f622dd2..aeda3bba255e 100644
>>> --- a/arch/arm64/include/asm/mem_encrypt.h
>>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>>       return is_realm_world();
>>>   }
>>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>>> +{
>>> +    if (is_realm_world())
>>> +        daddr |= prot_ns_shared;
>>> +    return daddr;
>>> +}
>>> +#define dma_decrypted dma_decrypted
>>> +
>>
>> There is an existing macro (PROT_NS_SHARED), which is preferred to return
>> prot_ns_shared or 0 depending on the availability of the realm 
>> capability.
>> However, that macro needs to be improved a bit so that it can be used 
>> here.
>> We need to return 0UL to match with the type of prot_ns_shared 
>> (unsigned long)
>>
>> -#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0)
>> +#define PROT_NS_SHARED         (is_realm_world() ? prot_ns_shared : 0UL)
>>
>> After that, the chunk of code can be as below.
>>
>>      return daddr | PROT_NS_SHARED;
>>
>>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>>> +{
>>> +    if (is_realm_world())
>>> +        daddr &= prot_ns_shared - 1;
>>> +    return daddr;
>>> +}
>>> +#define dma_encrypted dma_encrypted
>>> +
>>
>> With PROT_NS_SHARED, it can become something like below. 
>> (PROT_NS_SHARED - 1)
>> is equivalent to -1UL, 'daddr & -1UL' should be fine since it does 
>> nothing.
>>
> 
> I meant (PROT_NS_SHARED - 1) is equivalent to -1UL when no realm capability
> is around :)

I didn't want this to be there ;-). But with Robin's comment, I think we
can revert to PROT_NS_SHARED.

Cheers
Suzuki

> 
>>      return daddr & (PROT_NS_SHARED - 1);
>>
>>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>>> +{
>>> +    return dma_encrypted(daddr);
>>> +}
>>> +#define dma_clear_encryption dma_clear_encryption
>>> +
>>>   #endif    /* __ASM_MEM_ENCRYPT_H */
> 
> Thanks,
> Gavin
>
Suzuki K Poulose Feb. 26, 2025, 10 a.m. UTC | #6
On 25/02/2025 13:04, Robin Murphy wrote:
> On 2025-02-19 10:07 pm, Suzuki K Poulose wrote:
>> [...]
>>
>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>> index f8f78f622dd2..aeda3bba255e 100644
>> --- a/arch/arm64/include/asm/mem_encrypt.h
>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>> @@ -21,4 +21,26 @@ static inline bool force_dma_unencrypted(struct device *dev)
>>       return is_realm_world();
>>   }
>> +static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr |= prot_ns_shared;
>> +    return daddr;
>> +}
>> +#define dma_decrypted dma_decrypted
>> +
>> +static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
>> +{
>> +    if (is_realm_world())
>> +        daddr &= prot_ns_shared - 1;
> 
> Nit: is there a reason this isn't the direct inverse of the other 
> operation, i.e. "daddr &= ~prot_ns_shared"? If so, it might be worth 

It could be. The IPA space for the realm is split in half, with the
lower half protected/encrypted and anything above that unprotected.
Technically any addr >= prot_ns_shared is "unencrypted" (even though it
may be invalid, if >= BIT(IPA_Size) - 1). But now that I think of it, it
is much better to trigger a Stage 2 fault if the address is illegal
(i.e., > BIT(IPA_Size) - 1) than to corrupt some valid memory by
masking the top bits (beyond prot_ns_shared).

So, I will fix it.

Suzuki


> dropping a comment why we're doing slightly unintuitive arithmetic on a 
> pagetable attribute (and if not then maybe just do the more obvious 
> thing). I doubt anyone's in a rush to support TBI for DMA, and this 
> would be far from the only potential hiccup for that, but still... :)
> 
> Thanks,
> Robin.
> 
>> +    return daddr;
>> +}
>> +#define dma_encrypted dma_encrypted
>> +
>> +static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
>> +{
>> +    return dma_encrypted(daddr);
>> +}
>> +#define dma_clear_encryption dma_clear_encryption
>> +
>>   #endif    /* __ASM_MEM_ENCRYPT_H */
>

Patch

diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index f8f78f622dd2..aeda3bba255e 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -21,4 +21,26 @@  static inline bool force_dma_unencrypted(struct device *dev)
 	return is_realm_world();
 }
 
+static inline dma_addr_t dma_decrypted(dma_addr_t daddr)
+{
+	if (is_realm_world())
+		daddr |= prot_ns_shared;
+	return daddr;
+}
+#define dma_decrypted dma_decrypted
+
+static inline dma_addr_t dma_encrypted(dma_addr_t daddr)
+{
+	if (is_realm_world())
+		daddr &= prot_ns_shared - 1;
+	return daddr;
+}
+#define dma_encrypted dma_encrypted
+
+static inline dma_addr_t dma_clear_encryption(dma_addr_t daddr)
+{
+	return dma_encrypted(daddr);
+}
+#define dma_clear_encryption dma_clear_encryption
+
 #endif	/* __ASM_MEM_ENCRYPT_H */