
[06/10] dma: Use free_decrypted_pages()

Message ID: 20231017202505.340906-7-rick.p.edgecombe@intel.com (mailing list archive)
State: New
Series: Handle set_memory_XXcrypted() errors

Commit Message

Edgecombe, Rick P Oct. 17, 2023, 8:25 p.m. UTC
On TDX it is possible for the untrusted host to cause
set_memory_encrypted() or set_memory_decrypted() to fail such that an
error is returned and the resulting memory is shared. Callers need to take
care to handle these errors to avoid returning decrypted (shared) memory to
the page allocator, which could lead to functional or security issues.

DMA could free decrypted/shared pages if set_memory_decrypted() fails.
Use the recently added free_decrypted_pages() to avoid this.

Several paths also result in properly encrypted pages being freed through
the same freeing function. Rely on free_decrypted_pages() to not leak the
memory in these cases.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: iommu@lists.linux.dev
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
---
 include/linux/dma-map-ops.h | 3 ++-
 kernel/dma/contiguous.c     | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
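
For context, a minimal sketch of the failure mode the series addresses. The
wrapper name and call sites below are illustrative, not taken from the patch:

/*
 * Illustrative only -- not code from this patch. On TDX,
 * set_memory_decrypted() can fail and still leave the pages shared
 * with the untrusted host; freeing them anyway returns shared memory
 * to the page allocator.
 */
static void *broken_alloc_sketch(unsigned int order)
{
	struct page *page = alloc_pages(GFP_KERNEL, order);

	if (!page)
		return NULL;

	if (set_memory_decrypted((unsigned long)page_address(page),
				 1 << order)) {
		__free_pages(page, order);	/* BUG: pages may still be shared */
		return NULL;
	}
	return page_address(page);
}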

Comments

Christoph Hellwig Oct. 18, 2023, 6:24 a.m. UTC | #1
On Tue, Oct 17, 2023 at 01:25:01PM -0700, Rick Edgecombe wrote:
>  struct cma;
>  
> @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
>  static inline void dma_free_contiguous(struct device *dev, struct page *page,
>  		size_t size)
>  {
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));

CMA can be highmem, so this won't work, totally independent of what
free_decrypted_pages actually does.  Also please avoid the overly long line.
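
To make the highmem objection concrete, a sketch (the CMA allocation and the
variables cma, count, align, and size are illustrative): for a highmem page
with no current kernel mapping, page_address() returns NULL, so the patched
line would pass a bogus address to free_decrypted_pages():

struct page *page = cma_alloc(cma, count, align, false);	/* may be highmem */
void *vaddr = page_address(page);	/* NULL for unmapped highmem pages */

free_decrypted_pages((unsigned long)vaddr, get_order(size));	/* addr == 0 */
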
Edgecombe, Rick P Oct. 18, 2023, 5:09 p.m. UTC | #2
Link to whole series:
https://lore.kernel.org/lkml/20231017202505.340906-1-rick.p.edgecombe@intel.com/

On Wed, 2023-10-18 at 08:24 +0200, Christoph Hellwig wrote:
> On Tue, Oct 17, 2023 at 01:25:01PM -0700, Rick Edgecombe wrote:
> >   struct cma;
> >   
> > @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
> >   static inline void dma_free_contiguous(struct device *dev, struct page *page,
> >   		size_t size)
> >   {
> > -	__free_pages(page, get_order(size));
> > +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
> 
> CMA can be highmem, so this won't work, totally independent of what
> free_decrypted_pages actually does.  Also please avoid the overly
> long line.

Argh, yes this is broken for highmem. Thanks for pointing it out.

For x86, we don't need to worry about doing set_memory_XXcrypted() with
highmem. Checking the Kconfig logic around the other
set_memory_XXcrypted() implementations:
s390 - Doesn't support HIGHMEM
powerpc - Doesn't support set_memory_XXcrypted() and HIGHMEM together

So that would mean set_memory_encrypted() is not needed on the HIGHMEM
configs (i.e. it's ok if there is no virtual mapping at free-time,
because it can skip the conversion work).

So free_decrypted_pages() could be changed to not disturb the HIGHMEM
configs, like this:
static inline void free_decrypted_pages(struct page *page, int order)
{
	void *addr = page_address(page);
	int ret = 0;

	/* Unmapped highmem pages were never converted; skip the reset */
	if (addr)
		ret = set_memory_encrypted((unsigned long)addr, 1 << order);

	if (ret) {
		WARN_ONCE(1, "Failed...\n");
		return;
	}
	__free_pages(page, order);
}

Or we could just fix all the callers to open code the right logic. The
free_decrypted_pages() helper is not really saving code across the
series, and only serving to help callers avoid re-introducing the
issue. But I'm sort of worried it will be easy to do just that. Hmm...
Robin Murphy Oct. 18, 2023, 5:42 p.m. UTC | #3
On 2023-10-17 21:25, Rick Edgecombe wrote:
> On TDX it is possible for the untrusted host to cause
> set_memory_encrypted() or set_memory_decrypted() to fail such that an
> error is returned and the resulting memory is shared. Callers need to take
> care to handle these errors to avoid returning decrypted (shared) memory to
> the page allocator, which could lead to functional or security issues.
> 
> DMA could free decrypted/shared pages if set_memory_decrypted() fails.
> Use the recently added free_decrypted_pages() to avoid this.
> 
> Several paths also result in properly encrypted pages being freed through
> the same freeing function. Rely on free_decrypted_pages() to not leak the
> memory in these cases.

If something's needed in the fallback path here, what about the 
cma_release() paths?

Thanks,
Robin.

> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: iommu@lists.linux.dev
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>   include/linux/dma-map-ops.h | 3 ++-
>   kernel/dma/contiguous.c     | 2 +-
>   2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index f2fc203fb8a1..b0800cbbc357 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -9,6 +9,7 @@
>   #include <linux/dma-mapping.h>
>   #include <linux/pgtable.h>
>   #include <linux/slab.h>
> +#include <linux/set_memory.h>
>   
>   struct cma;
>   
> @@ -165,7 +166,7 @@ static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
>   static inline void dma_free_contiguous(struct device *dev, struct page *page,
>   		size_t size)
>   {
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
>   }
>   #endif /* CONFIG_DMA_CMA*/
>   
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index f005c66f378c..e962f1f6434e 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -429,7 +429,7 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
>   	}
>   
>   	/* not in any cma, free from buddy */
> -	__free_pages(page, get_order(size));
> +	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
>   }
>   
>   /*
Edgecombe, Rick P Oct. 23, 2023, 4:46 p.m. UTC | #4
On Wed, 2023-10-18 at 18:42 +0100, Robin Murphy wrote:
> On 2023-10-17 21:25, Rick Edgecombe wrote:
> > On TDX it is possible for the untrusted host to cause
> > set_memory_encrypted() or set_memory_decrypted() to fail such that
> > an
> > error is returned and the resulting memory is shared. Callers need
> > to take
> > care to handle these errors to avoid returning decrypted (shared)
> > memory to
> > the page allocator, which could lead to functional or security
> > issues.
> > 
> > DMA could free decrypted/shared pages if set_memory_decrypted()
> > fails.
> > Use the recently added free_decrypted_pages() to avoid this.
> > 
> > Several paths also result in properly encrypted pages being freed
> > through
> > the same freeing function. Rely on free_decrypted_pages() to not
> > leak the
> > memory in these cases.
> 
> If something's needed in the fallback path here, what about the 
> cma_release() paths?

You mean inside cma_release()? If so, unfortunately I think it won't
fit great because there are callers that are never dealing with shared
memory (huge tlb). The reset-to-private operation does extra work that
would be nice to avoid when possible.

The cases I thought exhibited the issue were the two call sites of
dma_set_decrypted(). Playing around with it, I was thinking it might be
easier to just fix those to open code leaking the pages on
dma_set_decrypted() error. In which case it won't have the re-encrypt
problem.

It makes it less foolproof, but more efficient. And
free_decrypted_pages() doesn't fit great anyway, as pointed out by
Christoph.
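
A sketch of what that open-coding could look like at a dma_set_decrypted()
call site (the wrapper name is hypothetical, and the exact fix that
eventually landed may differ):

/*
 * Sketch only. On conversion failure the pages may still be shared
 * with the host, so they are intentionally leaked rather than freed.
 */
static void *direct_alloc_sketch(struct device *dev, size_t size, gfp_t gfp)
{
	struct page *page = __dma_direct_alloc_pages(dev, size, gfp, true);
	void *ret;

	if (!page)
		return NULL;

	ret = page_address(page);
	if (dma_set_decrypted(dev, ret, size))
		return NULL;	/* leak: never return shared pages to the allocator */
	return ret;
}
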
Robin Murphy Oct. 23, 2023, 5:22 p.m. UTC | #5
On 2023-10-23 17:46, Edgecombe, Rick P wrote:
> On Wed, 2023-10-18 at 18:42 +0100, Robin Murphy wrote:
>> On 2023-10-17 21:25, Rick Edgecombe wrote:
>>> On TDX it is possible for the untrusted host to cause
>>> set_memory_encrypted() or set_memory_decrypted() to fail such that
>>> an
>>> error is returned and the resulting memory is shared. Callers need
>>> to take
>>> care to handle these errors to avoid returning decrypted (shared)
>>> memory to
>>> the page allocator, which could lead to functional or security
>>> issues.
>>>
>>> DMA could free decrypted/shared pages if set_memory_decrypted()
>>> fails.
>>> Use the recently added free_decrypted_pages() to avoid this.
>>>
>>> Several paths also result in proper encrypted pages being freed
>>> through
>>> the same freeing function. Rely on free_decrypted_pages() to not
>>> leak the
>>> memory in these cases.
>>
>> If something's needed in the fallback path here, what about the
>> cma_release() paths?
> 
> You mean inside cma_release()? If so, unfortunately I think it won't
> fit great because there are callers that are never dealing with shared
> memory (huge tlb). The reset-to-private operation does extra work that
> would be nice to avoid when possible.
> 
> The cases I thought exhibited the issue were the two call sites of
> dma_set_decrypted(). Playing around with it, I was thinking it might be
> easier to just fix those to open code leaking the pages on
> dma_set_decrypted() error. In which case it won't have the re-encrypt
> problem.
> 
> It makes it less foolproof, but more efficient. And
> free_decrypted_pages() doesn't fit great anyway, as pointed out by
> Christoph.

My point is that in dma_direct_alloc(), we get some memory either 
straight from the page allocator *or* from a CMA area, then call 
set_memory_decrypted() on it. If the problem is that 
set_memory_decrypted() can fail and require cleanup, then logically if 
that cleanup is necessary for the dma_free_contiguous()->__free_pages() 
call, then surely it must also be necessary for the 
dma_free_contiguous()->cma_release()->free_contig_range()->__free_page() 
calls.

Thanks,
Robin.
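
For reference, a simplified sketch of the dma_free_contiguous() structure
Robin is describing (per-NUMA CMA handling and some checks elided): the patch
converts only the final buddy fallback, while pages returned via
cma_release() never reach it.

void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
{
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/* CMA paths: these frees are untouched by the patch */
	if (dev->cma_area && cma_release(dev->cma_area, page, count))
		return;
	if (cma_release(dma_contiguous_default_area, page, count))
		return;

	/* not in any cma, free from buddy -- the only converted path */
	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
}
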
Edgecombe, Rick P Oct. 23, 2023, 5:27 p.m. UTC | #6
On Mon, 2023-10-23 at 18:22 +0100, Robin Murphy wrote:
> > > 
> > > If something's needed in the fallback path here, what about the
> > > cma_release() paths?
> > 
> > You mean inside cma_release()? If so, unfortunately I think it
> > won't
> > fit great because there are callers that are never dealing with
> > shared
> > memory (huge tlb). The reset-to-private operation does extra work
> > that
> > would be nice to avoid when possible.
> > 
> > The cases I thought exhibited the issue were the two call sites of
> > dma_set_decrypted(). Playing around with it, I was thinking it
> > might be
> > easier to just fix those to open code leaking the pages on
> > dma_set_decrypted() error. In which case it won't have the re-
> > encrypt
> > problem.
> > 
> > It makes it less foolproof, but more efficient. And
> > free_decrypted_pages() doesn't fit great anyway, as pointed out by
> > Christoph.
> 
My point is that in dma_direct_alloc(), we get some memory either
straight from the page allocator *or* from a CMA area, then call
set_memory_decrypted() on it. If the problem is that
set_memory_decrypted() can fail and require cleanup, then logically
if that cleanup is necessary for the
dma_free_contiguous()->__free_pages()
call, then surely it must also be necessary for the
dma_free_contiguous()->cma_release()->free_contig_range()->__free_page()
calls.

Oh, I see you are saying the patch misses that case. Yes, makes sense.

Sorry for the confusion. In trying to fix the callers, I waded through
a lot of areas that I didn't have much expertise in and probably
should have marked the whole thing RFC.

Patch

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f2fc203fb8a1..b0800cbbc357 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -9,6 +9,7 @@ 
 #include <linux/dma-mapping.h>
 #include <linux/pgtable.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 struct cma;
 
@@ -165,7 +166,7 @@  static inline struct page *dma_alloc_contiguous(struct device *dev, size_t size,
 static inline void dma_free_contiguous(struct device *dev, struct page *page,
 		size_t size)
 {
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 #endif /* CONFIG_DMA_CMA*/
 
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index f005c66f378c..e962f1f6434e 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -429,7 +429,7 @@  void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
 	}
 
 	/* not in any cma, free from buddy */
-	__free_pages(page, get_order(size));
+	free_decrypted_pages((unsigned long)page_address(page), get_order(size));
 }
 
 /*