diff mbox series

[v16,07/11] secretmem: use PMD-size pages to amortize direct map fragmentation

Message ID 20210121122723.3446-8-rppt@kernel.org (mailing list archive)
State New, archived
Headers show
Series mm: introduce memfd_secret system call to create "secret" memory areas | expand

Commit Message

Mike Rapoport Jan. 21, 2021, 12:27 p.m. UTC
From: Mike Rapoport <rppt@linux.ibm.com>

Removing a PAGE_SIZE page from the direct map every time such page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool for small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.

As pages allocated by secretmem become unmovable, use CMA to back large
page caches so that page allocator won't be surprised by failing attempt to
migrate these pages.

The CMA area used by secretmem is controlled by the "secretmem=" kernel
parameter. This allows explicit control over the memory available for
secretmem and provides upper hard limit for secretmem consumption.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 mm/Kconfig     |   2 +
 mm/secretmem.c | 175 +++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 150 insertions(+), 27 deletions(-)

Comments

Michal Hocko Jan. 26, 2021, 11:46 a.m. UTC | #1
On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Removing a PAGE_SIZE page from the direct map every time such page is
> allocated for a secret memory mapping will cause severe fragmentation of
> the direct map. This fragmentation can be reduced by using PMD-size pages
> as a pool for small pages for secret memory mappings.
> 
> Add a gen_pool per secretmem inode and lazily populate this pool with
> PMD-size pages.
> 
> As pages allocated by secretmem become unmovable, use CMA to back large
> page caches so that page allocator won't be surprised by failing attempt to
> migrate these pages.
> 
> The CMA area used by secretmem is controlled by the "secretmem=" kernel
> parameter. This allows explicit control over the memory available for
> secretmem and provides upper hard limit for secretmem consumption.

OK, so I have finally had a look at this closer and this is really not
acceptable. I have already mentioned that in a response to other patch
but any task is able to deprive access to secret memory to other tasks
and cause OOM killer which wouldn't really recover ever and potentially
panic the system. Now you could be less drastic and only make SIGBUS on
fault but that would be still quite terrible. There is a very good
reason why hugetlb implements is non-trivial reservation system to avoid
exactly these problems.

So unless I am really misreading the code
Nacked-by: Michal Hocko <mhocko@suse.com>

That doesn't mean I reject the whole idea. There are some details to
sort out as mentioned elsewhere but you cannot really depend on
pre-allocated pool which can fail at a fault time like that.

> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christopher Lameter <cl@linux.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Elena Reshetova <elena.reshetova@intel.com>
> Cc: Hagen Paul Pfeifer <hagen@jauu.net>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michael Kerrisk <mtk.manpages@gmail.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tycho Andersen <tycho@tycho.ws>
> Cc: Will Deacon <will@kernel.org>
> ---
>  mm/Kconfig     |   2 +
>  mm/secretmem.c | 175 +++++++++++++++++++++++++++++++++++++++++--------
>  2 files changed, 150 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 5f8243442f66..ec35bf406439 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -874,5 +874,7 @@ config KMAP_LOCAL
>  
>  config SECRETMEM
>  	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
> +	select GENERIC_ALLOCATOR
> +	select CMA
>  
>  endmenu
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 904351d12c33..469211c7cc3a 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -7,12 +7,15 @@
>  
>  #include <linux/mm.h>
>  #include <linux/fs.h>
> +#include <linux/cma.h>
>  #include <linux/mount.h>
>  #include <linux/memfd.h>
>  #include <linux/bitops.h>
>  #include <linux/printk.h>
>  #include <linux/pagemap.h>
> +#include <linux/genalloc.h>
>  #include <linux/syscalls.h>
> +#include <linux/memblock.h>
>  #include <linux/pseudo_fs.h>
>  #include <linux/secretmem.h>
>  #include <linux/set_memory.h>
> @@ -35,24 +38,94 @@
>  #define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
>  
>  struct secretmem_ctx {
> +	struct gen_pool *pool;
>  	unsigned int mode;
>  };
>  
> -static struct page *secretmem_alloc_page(gfp_t gfp)
> +static struct cma *secretmem_cma;
> +
> +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  {
> +	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> +	struct gen_pool *pool = ctx->pool;
> +	unsigned long addr;
> +	struct page *page;
> +	int i, err;
> +
> +	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
> +	if (!page)
> +		return -ENOMEM;
> +
>  	/*
> -	 * FIXME: use a cache of large pages to reduce the direct map
> -	 * fragmentation
> +	 * clear the data left from the prevoius user before dropping the
> +	 * pages from the direct map
>  	 */
> -	return alloc_page(gfp | __GFP_ZERO);
> +	for (i = 0; i < nr_pages; i++)
> +		clear_highpage(page + i);
> +
> +	err = set_direct_map_invalid_noflush(page, nr_pages);
> +	if (err)
> +		goto err_cma_release;
> +
> +	addr = (unsigned long)page_address(page);
> +	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> +	if (err)
> +		goto err_set_direct_map;
> +
> +	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
> +
> +	return 0;
> +
> +err_set_direct_map:
> +	/*
> +	 * If a split of PUD-size page was required, it already happened
> +	 * when we marked the pages invalid which guarantees that this call
> +	 * won't fail
> +	 */
> +	set_direct_map_default_noflush(page, nr_pages);
> +err_cma_release:
> +	cma_release(secretmem_cma, page, nr_pages);
> +	return err;
> +}
> +
> +static void secretmem_free_page(struct secretmem_ctx *ctx, struct page *page)
> +{
> +	unsigned long addr = (unsigned long)page_address(page);
> +	struct gen_pool *pool = ctx->pool;
> +
> +	gen_pool_free(pool, addr, PAGE_SIZE);
> +}
> +
> +static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
> +					 gfp_t gfp)
> +{
> +	struct gen_pool *pool = ctx->pool;
> +	unsigned long addr;
> +	struct page *page;
> +	int err;
> +
> +	if (gen_pool_avail(pool) < PAGE_SIZE) {
> +		err = secretmem_pool_increase(ctx, gfp);
> +		if (err)
> +			return NULL;
> +	}
> +
> +	addr = gen_pool_alloc(pool, PAGE_SIZE);
> +	if (!addr)
> +		return NULL;
> +
> +	page = virt_to_page(addr);
> +	get_page(page);
> +
> +	return page;
>  }
>  
>  static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  {
> +	struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
>  	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
>  	struct inode *inode = file_inode(vmf->vma->vm_file);
>  	pgoff_t offset = vmf->pgoff;
> -	unsigned long addr;
>  	struct page *page;
>  	int err;
>  
> @@ -62,40 +135,25 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  retry:
>  	page = find_lock_page(mapping, offset);
>  	if (!page) {
> -		page = secretmem_alloc_page(vmf->gfp_mask);
> +		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
>  		if (!page)
>  			return VM_FAULT_OOM;
>  
> -		err = set_direct_map_invalid_noflush(page, 1);
> -		if (err) {
> -			put_page(page);
> -			return vmf_error(err);
> -		}
> -
>  		__SetPageUptodate(page);
>  		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
>  		if (unlikely(err)) {
> +			secretmem_free_page(ctx, page);
>  			put_page(page);
>  			if (err == -EEXIST)
>  				goto retry;
> -			goto err_restore_direct_map;
> +			return vmf_error(err);
>  		}
>  
> -		addr = (unsigned long)page_address(page);
> -		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +		set_page_private(page, (unsigned long)ctx);
>  	}
>  
>  	vmf->page = page;
>  	return VM_FAULT_LOCKED;
> -
> -err_restore_direct_map:
> -	/*
> -	 * If a split of large page was required, it already happened
> -	 * when we marked the page invalid which guarantees that this call
> -	 * won't fail
> -	 */
> -	set_direct_map_default_noflush(page, 1);
> -	return vmf_error(err);
>  }
>  
>  static const struct vm_operations_struct secretmem_vm_ops = {
> @@ -141,8 +199,9 @@ static int secretmem_migratepage(struct address_space *mapping,
>  
>  static void secretmem_freepage(struct page *page)
>  {
> -	set_direct_map_default_noflush(page, 1);
> -	clear_highpage(page);
> +	struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
> +
> +	secretmem_free_page(ctx, page);
>  }
>  
>  static const struct address_space_operations secretmem_aops = {
> @@ -177,13 +236,18 @@ static struct file *secretmem_file_create(unsigned long flags)
>  	if (!ctx)
>  		goto err_free_inode;
>  
> +	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> +	if (!ctx->pool)
> +		goto err_free_ctx;
> +
>  	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
>  				 O_RDWR, &secretmem_fops);
>  	if (IS_ERR(file))
> -		goto err_free_ctx;
> +		goto err_free_pool;
>  
>  	mapping_set_unevictable(inode->i_mapping);
>  
> +	inode->i_private = ctx;
>  	inode->i_mapping->private_data = ctx;
>  	inode->i_mapping->a_ops = &secretmem_aops;
>  
> @@ -197,6 +261,8 @@ static struct file *secretmem_file_create(unsigned long flags)
>  
>  	return file;
>  
> +err_free_pool:
> +	gen_pool_destroy(ctx->pool);
>  err_free_ctx:
>  	kfree(ctx);
>  err_free_inode:
> @@ -215,6 +281,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
>  	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
>  		return -EINVAL;
>  
> +	if (!secretmem_cma)
> +		return -ENOMEM;
> +
>  	fd = get_unused_fd_flags(flags & O_CLOEXEC);
>  	if (fd < 0)
>  		return fd;
> @@ -235,11 +304,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
>  	return err;
>  }
>  
> +static void secretmem_cleanup_chunk(struct gen_pool *pool,
> +				    struct gen_pool_chunk *chunk, void *data)
> +{
> +	unsigned long start = chunk->start_addr;
> +	unsigned long end = chunk->end_addr;
> +	struct page *page = virt_to_page(start);
> +	unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
> +	int i;
> +
> +	set_direct_map_default_noflush(page, nr_pages);
> +
> +	for (i = 0; i < nr_pages; i++)
> +		clear_highpage(page + i);
> +
> +	cma_release(secretmem_cma, page, nr_pages);
> +}
> +
> +static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
> +{
> +	struct gen_pool *pool = ctx->pool;
> +
> +	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
> +	gen_pool_destroy(pool);
> +}
> +
>  static void secretmem_evict_inode(struct inode *inode)
>  {
>  	struct secretmem_ctx *ctx = inode->i_private;
>  
>  	truncate_inode_pages_final(&inode->i_data);
> +	secretmem_cleanup_pool(ctx);
>  	clear_inode(inode);
>  	kfree(ctx);
>  }
> @@ -276,3 +371,29 @@ static int secretmem_init(void)
>  	return ret;
>  }
>  fs_initcall(secretmem_init);
> +
> +static int __init secretmem_setup(char *str)
> +{
> +	phys_addr_t align = PMD_SIZE;
> +	unsigned long reserved_size;
> +	int err;
> +
> +	reserved_size = memparse(str, NULL);
> +	if (!reserved_size)
> +		return 0;
> +
> +	if (reserved_size * 2 > PUD_SIZE)
> +		align = PUD_SIZE;
> +
> +	err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
> +				     "secretmem", &secretmem_cma);
> +	if (err) {
> +		pr_err("failed to create CMA: %d\n", err);
> +		return err;
> +	}
> +
> +	pr_info("reserved %luM\n", reserved_size >> 20);
> +
> +	return 0;
> +}
> +__setup("secretmem=", secretmem_setup);
> -- 
> 2.28.0
>
David Hildenbrand Jan. 26, 2021, 11:56 a.m. UTC | #2
On 26.01.21 12:46, Michal Hocko wrote:
> On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
>> From: Mike Rapoport <rppt@linux.ibm.com>
>>
>> Removing a PAGE_SIZE page from the direct map every time such page is
>> allocated for a secret memory mapping will cause severe fragmentation of
>> the direct map. This fragmentation can be reduced by using PMD-size pages
>> as a pool for small pages for secret memory mappings.
>>
>> Add a gen_pool per secretmem inode and lazily populate this pool with
>> PMD-size pages.
>>
>> As pages allocated by secretmem become unmovable, use CMA to back large
>> page caches so that page allocator won't be surprised by failing attempt to
>> migrate these pages.
>>
>> The CMA area used by secretmem is controlled by the "secretmem=" kernel
>> parameter. This allows explicit control over the memory available for
>> secretmem and provides upper hard limit for secretmem consumption.
> 
> OK, so I have finally had a look at this closer and this is really not
> acceptable. I have already mentioned that in a response to other patch
> but any task is able to deprive access to secret memory to other tasks
> and cause OOM killer which wouldn't really recover ever and potentially
> panic the system. Now you could be less drastic and only make SIGBUS on
> fault but that would be still quite terrible. There is a very good
> reason why hugetlb implements is non-trivial reservation system to avoid
> exactly these problems.
> 
> So unless I am really misreading the code
> Nacked-by: Michal Hocko <mhocko@suse.com>
> 
> That doesn't mean I reject the whole idea. There are some details to
> sort out as mentioned elsewhere but you cannot really depend on
> pre-allocated pool which can fail at a fault time like that.

So, to do it similar to hugetlbfs (e.g., with CMA), there would have to 
be a mechanism to actually try pre-reserving (e.g., from the CMA area), 
at which point in time the pages would get moved to the secretmem pool, 
and a mechanism for mmap() etc. to "reserve" from these secretmem pool, 
such that there are guarantees at fault time?

What we have right now feels like some kind of overcommit (reading, as 
overcommiting huge pages, so we might get SIGBUS at fault time).

TBH, the SIGBUS thingy doesn't sound terrible to me - if this behavior 
is to be expected right now by applications using it and they can handle 
it - no guarantees. I fully agree that some kind of 
reservation/guarantee mechanism would be preferable.
Michal Hocko Jan. 26, 2021, 12:08 p.m. UTC | #3
On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> On 26.01.21 12:46, Michal Hocko wrote:
> > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > 
> > > Removing a PAGE_SIZE page from the direct map every time such page is
> > > allocated for a secret memory mapping will cause severe fragmentation of
> > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > as a pool for small pages for secret memory mappings.
> > > 
> > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > PMD-size pages.
> > > 
> > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > page caches so that page allocator won't be surprised by failing attempt to
> > > migrate these pages.
> > > 
> > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > parameter. This allows explicit control over the memory available for
> > > secretmem and provides upper hard limit for secretmem consumption.
> > 
> > OK, so I have finally had a look at this closer and this is really not
> > acceptable. I have already mentioned that in a response to other patch
> > but any task is able to deprive access to secret memory to other tasks
> > and cause OOM killer which wouldn't really recover ever and potentially
> > panic the system. Now you could be less drastic and only make SIGBUS on
> > fault but that would be still quite terrible. There is a very good
> > reason why hugetlb implements is non-trivial reservation system to avoid
> > exactly these problems.
> > 
> > So unless I am really misreading the code
> > Nacked-by: Michal Hocko <mhocko@suse.com>
> > 
> > That doesn't mean I reject the whole idea. There are some details to
> > sort out as mentioned elsewhere but you cannot really depend on
> > pre-allocated pool which can fail at a fault time like that.
> 
> So, to do it similar to hugetlbfs (e.g., with CMA), there would have to be a
> mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> point in time the pages would get moved to the secretmem pool, and a
> mechanism for mmap() etc. to "reserve" from these secretmem pool, such that
> there are guarantees at fault time?

yes, reserve at mmap time and use during the fault. But this all sounds
like a self inflicted problem to me. Sure you can have a pre-allocated
or more dynamic pool to reduce the direct mapping fragmentation but you
can always fall back to regular allocatios. In other ways have the pool
as an optimization rather than a hard requirement. With a careful access
control this sounds like a manageable solution to me.
Mike Rapoport Jan. 28, 2021, 9:22 a.m. UTC | #4
On Tue, Jan 26, 2021 at 01:08:23PM +0100, Michal Hocko wrote:
> On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> > On 26.01.21 12:46, Michal Hocko wrote:
> > > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > 
> > > > Removing a PAGE_SIZE page from the direct map every time such page is
> > > > allocated for a secret memory mapping will cause severe fragmentation of
> > > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > > as a pool for small pages for secret memory mappings.
> > > > 
> > > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > > PMD-size pages.
> > > > 
> > > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > > page caches so that page allocator won't be surprised by failing attempt to
> > > > migrate these pages.
> > > > 
> > > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > > parameter. This allows explicit control over the memory available for
> > > > secretmem and provides upper hard limit for secretmem consumption.
> > > 
> > > OK, so I have finally had a look at this closer and this is really not
> > > acceptable. I have already mentioned that in a response to other patch
> > > but any task is able to deprive access to secret memory to other tasks
> > > and cause OOM killer which wouldn't really recover ever and potentially
> > > panic the system. Now you could be less drastic and only make SIGBUS on
> > > fault but that would be still quite terrible. There is a very good
> > > reason why hugetlb implements is non-trivial reservation system to avoid
> > > exactly these problems.

So, if I understand your concerns correct this implementation has two
issues:
1) allocation failure at page fault that causes unrecoverable OOM and
2) a possibility for an unprivileged user to deplete secretmem pool and
cause (1) to others

I'm not really familiar with OOM internals, but when I simulated an
allocation failure in my testing only the allocating process and it's
parent were OOM-killed and then the system continued normally. 

You are right, it would be better if we SIGBUS instead of OOM but I don't
agree SIGBUS is terrible. As we started to draw parallels with hugetlbfs
even despite it's complex reservation system, hugetlb_fault() may fail to
allocate pages from CMA and this still will cause SIGBUS.

And hugetlb pools may be also depleted by anybody by calling
mmap(MAP_HUGETLB) and there is no any limiting knob for this, while
secretmem has RLIMIT_MEMLOCK.

That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
secretmem at least as controllable and robust than hugeltbfs even without
complex reservation at mmap() time.

> > > So unless I am really misreading the code
> > > Nacked-by: Michal Hocko <mhocko@suse.com>
> > > 
> > > That doesn't mean I reject the whole idea. There are some details to
> > > sort out as mentioned elsewhere but you cannot really depend on
> > > pre-allocated pool which can fail at a fault time like that.
> > 
> > So, to do it similar to hugetlbfs (e.g., with CMA), there would have to be a
> > mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> > point in time the pages would get moved to the secretmem pool, and a
> > mechanism for mmap() etc. to "reserve" from these secretmem pool, such that
> > there are guarantees at fault time?
> 
> yes, reserve at mmap time and use during the fault. But this all sounds
> like a self inflicted problem to me. Sure you can have a pre-allocated
> or more dynamic pool to reduce the direct mapping fragmentation but you
> can always fall back to regular allocatios. In other ways have the pool
> as an optimization rather than a hard requirement. With a careful access
> control this sounds like a manageable solution to me.

I'd really wish we had this discussion for earlier spins of this series,
but since this didn't happen let's refresh the history a bit.

One of the major pushbacks on the first RFC [1] of the concept was about
the direct map fragmentation. I tried really hard to find data that shows
what is the performance difference with different page sizes in the direct
map and I didn't find anything.

So presuming that large pages do provide advantage the first implementation
of secretmem used PMD_ORDER allocations to amortise the effect of the
direct map fragmentation and then handed out 4k pages at each fault. In
addition there was an option to reserve a finite pool at boot time and
limit secretmem allocations only to that pool.

At some point David suggested to use CMA to improve overall flexibility
[3], so I switched secretmem to use CMA.

Now, with the data we have at hand (my benchmarks and Intel's report David
mentioned) I'm even not sure this whole pooling even required.

I like the idea to have a pool as an optimization rather than a hard
requirement but I don't see why would it need a careful access control. As
the direct map fragmentation is not necessarily degrades the performance
(and even sometimes it actually improves it) and even then the degradation
is small, trying a PMD_ORDER allocation for a pool and then falling back to
4K page may be just fine.

I think we could have something like this (error handling is mostly
omitted):

static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
{
	struct page *page = alloc_pages(gfp, PMD_PAGE_ORDER);

	if (!page)
		return -ENOMEM;

	/* add large page to pool */
	
	return 0;
}

static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
					 gfp_t gfp)
{
	struct page *page;
	...

	if (gen_pool_avail(pool) < PAGE_SIZE) {
		err = secretmem_pool_increase(ctx, gfp);
		if (!err) {
			addr = gen_pool_alloc(pool, PAGE_SIZE);
			if (addr)
				page = virt_to_page(addr);
		}
	}

	if (!page)
		page = alloc_page(gfp);

	return page;	
}

[1] https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@kernel.org/
[2] https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org/
[3] https://lore.kernel.org/lkml/03ec586d-c00c-c57e-3118-7186acb7b823@redhat.com/#t
Michal Hocko Jan. 28, 2021, 1:01 p.m. UTC | #5
On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 01:08:23PM +0100, Michal Hocko wrote:
> > On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> > > On 26.01.21 12:46, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > 
> > > > > Removing a PAGE_SIZE page from the direct map every time such page is
> > > > > allocated for a secret memory mapping will cause severe fragmentation of
> > > > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > > > as a pool for small pages for secret memory mappings.
> > > > > 
> > > > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > > > PMD-size pages.
> > > > > 
> > > > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > > > page caches so that page allocator won't be surprised by failing attempt to
> > > > > migrate these pages.
> > > > > 
> > > > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > > > parameter. This allows explicit control over the memory available for
> > > > > secretmem and provides upper hard limit for secretmem consumption.
> > > > 
> > > > OK, so I have finally had a look at this closer and this is really not
> > > > acceptable. I have already mentioned that in a response to other patch
> > > > but any task is able to deprive access to secret memory to other tasks
> > > > and cause OOM killer which wouldn't really recover ever and potentially
> > > > panic the system. Now you could be less drastic and only make SIGBUS on
> > > > fault but that would be still quite terrible. There is a very good
> > > > reason why hugetlb implements is non-trivial reservation system to avoid
> > > > exactly these problems.
> 
> So, if I understand your concerns correct this implementation has two
> issues:
> 1) allocation failure at page fault that causes unrecoverable OOM and
> 2) a possibility for an unprivileged user to deplete secretmem pool and
> cause (1) to others
> 
> I'm not really familiar with OOM internals, but when I simulated an
> allocation failure in my testing only the allocating process and it's
> parent were OOM-killed and then the system continued normally. 

If you kill the allocating process then yes, it would work, but your
process might be the very last to be selected.

> You are right, it would be better if we SIGBUS instead of OOM but I don't
> agree SIGBUS is terrible. As we started to draw parallels with hugetlbfs
> even despite it's complex reservation system, hugetlb_fault() may fail to
> allocate pages from CMA and this still will cause SIGBUS.

This is an unexpected runtime error. Unless you make it an integral part
of the API design.

> And hugetlb pools may be also depleted by anybody by calling
> mmap(MAP_HUGETLB) and there is no any limiting knob for this, while
> secretmem has RLIMIT_MEMLOCK.

Yes it can fail. But it would fail at the mmap time when the reservation
fails. Not during the #PF time which can be at any time.

> That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
> secretmem at least as controllable and robust than hugeltbfs even without
> complex reservation at mmap() time.

Still sucks huge!

> > > > So unless I am really misreading the code
> > > > Nacked-by: Michal Hocko <mhocko@suse.com>
> > > > 
> > > > That doesn't mean I reject the whole idea. There are some details to
> > > > sort out as mentioned elsewhere but you cannot really depend on
> > > > pre-allocated pool which can fail at a fault time like that.
> > > 
> > > So, to do it similar to hugetlbfs (e.g., with CMA), there would have to be a
> > > mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> > > point in time the pages would get moved to the secretmem pool, and a
> > > mechanism for mmap() etc. to "reserve" from these secretmem pool, such that
> > > there are guarantees at fault time?
> > 
> > yes, reserve at mmap time and use during the fault. But this all sounds
> > like a self inflicted problem to me. Sure you can have a pre-allocated
> > or more dynamic pool to reduce the direct mapping fragmentation but you
> > can always fall back to regular allocatios. In other ways have the pool
> > as an optimization rather than a hard requirement. With a careful access
> > control this sounds like a manageable solution to me.
> 
> I'd really wish we had this discussion for earlier spins of this series,
> but since this didn't happen let's refresh the history a bit.

I am sorry but I am really fighting to find time to watch for all the
moving targets...

> One of the major pushbacks on the first RFC [1] of the concept was about
> the direct map fragmentation. I tried really hard to find data that shows
> what is the performance difference with different page sizes in the direct
> map and I didn't find anything.
> 
> So presuming that large pages do provide advantage the first implementation
> of secretmem used PMD_ORDER allocations to amortise the effect of the
> direct map fragmentation and then handed out 4k pages at each fault. In
> addition there was an option to reserve a finite pool at boot time and
> limit secretmem allocations only to that pool.
> 
> At some point David suggested to use CMA to improve overall flexibility
> [3], so I switched secretmem to use CMA.
> 
> Now, with the data we have at hand (my benchmarks and Intel's report David
> mentioned) I'm even not sure this whole pooling even required.

I would still like to understand whether that data is actually
representative. With some underlying reasoning rather than I have run
these XYZ benchmarks and numbers do not look terrible.

> I like the idea to have a pool as an optimization rather than a hard
> requirement but I don't see why would it need a careful access control. As
> the direct map fragmentation is not necessarily degrades the performance
> (and even sometimes it actually improves it) and even then the degradation
> is small, trying a PMD_ORDER allocation for a pool and then falling back to
> 4K page may be just fine.

Well, as soon as this is a scarce resource then an access control seems
like a first thing to think of. Maybe it is not really necessary but
then this should be really justified.

I am also still not sure why this whole thing is not just a
ramdisk/ramfs which happens to unmap its pages from the direct
map. Wouldn't that be a much more easier model to work with? You would
get an access control for free as well.
Christoph Lameter (Ampere) Jan. 28, 2021, 1:28 p.m. UTC | #6
On Thu, 28 Jan 2021, Michal Hocko wrote:

> > So, if I understand your concerns correct this implementation has two
> > issues:
> > 1) allocation failure at page fault that causes unrecoverable OOM and
> > 2) a possibility for an unprivileged user to deplete secretmem pool and
> > cause (1) to others
> >
> > I'm not really familiar with OOM internals, but when I simulated an
> > allocation failure in my testing only the allocating process and it's
> > parent were OOM-killed and then the system continued normally.
>
> If you kill the allocating process then yes, it would work, but your
> process might be the very last to be selected.

OOMs are different if you have a "constrained allocation". In that case it
is the fault of the process who wanted memory with certain conditions.
That memory is not available. General memory is available though. In that
case the allocating process is killed.
Michal Hocko Jan. 28, 2021, 1:49 p.m. UTC | #7
On Thu 28-01-21 13:28:10, Cristopher Lameter wrote:
> On Thu, 28 Jan 2021, Michal Hocko wrote:
> 
> > > So, if I understand your concerns correct this implementation has two
> > > issues:
> > > 1) allocation failure at page fault that causes unrecoverable OOM and
> > > 2) a possibility for an unprivileged user to deplete secretmem pool and
> > > cause (1) to others
> > >
> > > I'm not really familiar with OOM internals, but when I simulated an
> > > allocation failure in my testing only the allocating process and it's
> > > parent were OOM-killed and then the system continued normally.
> >
> > If you kill the allocating process then yes, it would work, but your
> > process might be the very last to be selected.
> 
> OOMs are different if you have a "constrained allocation". In that case it
> is the fault of the process who wanted memory with certain conditions.
> That memory is not available. General memory is available though. In that
> case the allocating process is killed.

I do not see this implementation would do anything like that. Neither
anything like that implemented in the oom killer. Constrained
allocations (cpusets/memcg/mempolicy) only do restrict their selection
to processes which belong to the same domain. So I am not really sure
what you are referring to. The is only a global knob to _always_ kill
the allocating process on OOM.
James Bottomley Jan. 28, 2021, 3:28 p.m. UTC | #8
On Thu, 2021-01-28 at 14:01 +0100, Michal Hocko wrote:
> On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
[...]
> > One of the major pushbacks on the first RFC [1] of the concept was
> > about the direct map fragmentation. I tried really hard to find
> > data that shows what is the performance difference with different
> > page sizes in the direct map and I didn't find anything.
> > 
> > So presuming that large pages do provide advantage the first
> > implementation of secretmem used PMD_ORDER allocations to amortise
> > the effect of the direct map fragmentation and then handed out 4k
> > pages at each fault. In addition there was an option to reserve a
> > finite pool at boot time and limit secretmem allocations only to
> > that pool.
> > 
> > At some point David suggested to use CMA to improve overall
> > flexibility [3], so I switched secretmem to use CMA.
> > 
> > Now, with the data we have at hand (my benchmarks and Intel's
> > report David mentioned) I'm even not sure this whole pooling even
> > required.
> 
> I would still like to understand whether that data is actually
> representative. With some underlying reasoning rather than I have run
> these XYZ benchmarks and numbers do not look terrible.

My theory, and the reason I made Mike run the benchmarks, is that our
fear of TLB miss has been alleviated by CPU speculation advances over
the years.  You can appreciate this if you think that both Intel and
AMD have increased the number of levels in the page table to
accommodate larger virtual memory size 5 instead of 3.  That increases
the length of the page walk nearly 2x in a physical system and even
more in a virtual system.  Unless this were massively optimized,
systems would have slowed down significantly.  Using 2M pages only
eliminates one level and 2G pages eliminates 2, so I theorized that
actually fragmentation wouldn't be the significant problem we once
thought it was and asked Mike to benchmark it.

The benchmarks show that indeed, it isn't a huge change in the data TLB
miss time, I suspect because data is nicely continuous nowadays and the
prediction that goes into the CPU optimizations quite easy.  ITLB
fragmentation actually seems to be quite a bit worse, likely because we
still don't have branch prediction down to an exact science.

James
Christoph Lameter (Ampere) Jan. 28, 2021, 3:56 p.m. UTC | #9
On Thu, 28 Jan 2021, Michal Hocko wrote:

> > > If you kill the allocating process then yes, it would work, but your
> > > process might be the very last to be selected.
> >
> > OOMs are different if you have a "constrained allocation". In that case it
> > is the fault of the process who wanted memory with certain conditions.
> > That memory is not available. General memory is available though. In that
> > case the allocating process is killed.
>
> I do not see this implementation would do anything like that. Neither
> anything like that implemented in the oom killer. Constrained
> allocations (cpusets/memcg/mempolicy) only do restrict their selection
> to processes which belong to the same domain. So I am not really sure
> what you are referring to. The is only a global knob to _always_ kill
> the allocating process on OOM.

Constrained allocations refer to allocations where the NUMA nodes are
restricted or something else does not allow the use of arbitrary memory.
The OOM killer changes its behavior. In the past we fell back to killing
the calling process.

See constrained_alloc() in mm/oom_kill.c

static const char * const oom_constraint_text[] = {
        [CONSTRAINT_NONE] = "CONSTRAINT_NONE",
        [CONSTRAINT_CPUSET] = "CONSTRAINT_CPUSET",
        [CONSTRAINT_MEMORY_POLICY] = "CONSTRAINT_MEMORY_POLICY",
        [CONSTRAINT_MEMCG] = "CONSTRAINT_MEMCG",
};

/*
 * Determine the type of allocation constraint.
 */
static enum oom_constraint constrained_alloc(struct oom_control *oc)
{
Michal Hocko Jan. 28, 2021, 4:23 p.m. UTC | #10
On Thu 28-01-21 15:56:36, Cristopher Lameter wrote:
> On Thu, 28 Jan 2021, Michal Hocko wrote:
> 
> > > > If you kill the allocating process then yes, it would work, but your
> > > > process might be the very last to be selected.
> > >
> > > OOMs are different if you have a "constrained allocation". In that case it
> > > is the fault of the process who wanted memory with certain conditions.
> > > That memory is not available. General memory is available though. In that
> > > case the allocating process is killed.
> >
> > I do not see this implementation would do anything like that. Neither
> > anything like that implemented in the oom killer. Constrained
> > allocations (cpusets/memcg/mempolicy) only do restrict their selection
> > to processes which belong to the same domain. So I am not really sure
> > what you are referring to. The is only a global knob to _always_ kill
> > the allocating process on OOM.
> 
> Constrained allocations refer to allocations where the NUMA nodes are
> restricted or something else does not allow the use of arbitrary memory.
> The OOM killer changes its behavior.

Yes as described in the above paragraph.

> In the past we fell back to killing the calling process.

Yeah, but this is no longer the case since 6f48d0ebd907a (more than 10
years ago.

Anyway this is not really important because if you want to kill the
allocating task because there is no chance the fault can succed then
there is a SIGBUS as already mentioned.
James Bottomley Jan. 28, 2021, 9:05 p.m. UTC | #11
On Thu, 2021-01-28 at 14:01 +0100, Michal Hocko wrote:
> On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
[...]
> > I like the idea to have a pool as an optimization rather than a
> > hard requirement but I don't see why would it need a careful access
> > control. As the direct map fragmentation is not necessarily
> > degrades the performance (and even sometimes it actually improves
> > it) and even then the degradation is small, trying a PMD_ORDER
> > allocation for a pool and then falling back to 4K page may be just
> > fine.
> 
> Well, as soon as this is a scarce resource then an access control
> seems like a first thing to think of. Maybe it is not really
> necessary but then this should be really justified.

The control for the resource is effectively the rlimit today.  I don't
think dividing the world into people who can and can't use secret
memory would be useful since the design is to be usable for anyone who
might have a secret to keep; it would become like the kvm group
permissions: something which is theoretically an access control but
which in practise is given to everyone on the system.

> I am also still not sure why this whole thing is not just a
> ramdisk/ramfs which happens to unmap its pages from the direct
> map. Wouldn't that be a much more easier model to work with? You
> would get an access control for free as well.

The original API was a memfd which does have this access control as
well.  However, the decision was made after much discussion to go with
a new system call instead.  Obviously the API choice could be revisited
but do you have anything to add over the previous discussion, or is
this just to get your access control?

James
Mike Rapoport Jan. 29, 2021, 7:03 a.m. UTC | #12
On Thu, Jan 28, 2021 at 07:28:57AM -0800, James Bottomley wrote:
> On Thu, 2021-01-28 at 14:01 +0100, Michal Hocko wrote:
> > On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> [...]
> > > One of the major pushbacks on the first RFC [1] of the concept was
> > > about the direct map fragmentation. I tried really hard to find
> > > data that shows what is the performance difference with different
> > > page sizes in the direct map and I didn't find anything.
> > > 
> > > So presuming that large pages do provide advantage the first
> > > implementation of secretmem used PMD_ORDER allocations to amortise
> > > the effect of the direct map fragmentation and then handed out 4k
> > > pages at each fault. In addition there was an option to reserve a
> > > finite pool at boot time and limit secretmem allocations only to
> > > that pool.
> > > 
> > > At some point David suggested to use CMA to improve overall
> > > flexibility [3], so I switched secretmem to use CMA.
> > > 
> > > Now, with the data we have at hand (my benchmarks and Intel's
> > > report David mentioned) I'm even not sure this whole pooling even
> > > required.
> > 
> > I would still like to understand whether that data is actually
> > representative. With some underlying reasoning rather than I have run
> > these XYZ benchmarks and numbers do not look terrible.
> 
> My theory, and the reason I made Mike run the benchmarks, is that our
> fear of TLB miss has been alleviated by CPU speculation advances over
> the years.  You can appreciate this if you think that both Intel and
> AMD have increased the number of levels in the page table to
> accommodate larger virtual memory size 5 instead of 3.  That increases
> the length of the page walk nearly 2x in a physical system and even
> more in a virtual system.  Unless this were massively optimized,
> systems would have slowed down significantly.  Using 2M pages only
> eliminates one level and 2G pages eliminates 2, so I theorized that
> actually fragmentation wouldn't be the significant problem we once
> thought it was and asked Mike to benchmark it.
> 
> The benchmarks show that indeed, it isn't a huge change in the data TLB
> miss time, I suspect because data is nicely continuous nowadays and the
> prediction that goes into the CPU optimizations quite easy.  ITLB
> fragmentation actually seems to be quite a bit worse, likely because we
> still don't have branch prediction down to an exact science.

Another thing is that normally useful work done by userspace so data
accesses are dominated by userspace and any change in dTLB miss rate for
kernel data accesses is only a small fraction of all misses.

> James
> 
>
Mike Rapoport Jan. 29, 2021, 7:21 a.m. UTC | #13
On Thu, Jan 28, 2021 at 02:01:06PM +0100, Michal Hocko wrote:
> On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> 
> > And hugetlb pools may be also depleted by anybody by calling
> > mmap(MAP_HUGETLB) and there is no any limiting knob for this, while
> > secretmem has RLIMIT_MEMLOCK.
> 
> Yes it can fail. But it would fail at the mmap time when the reservation
> fails. Not during the #PF time which can be at any time.

It may fail at $PF time as well:

hugetlb_fault()
        hugeltb_no_page()
                ...
                alloc_huge_page()
                        alloc_gigantic_page()
                                cma_alloc()
                                        -ENOMEM; 

 
> > That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
> > secretmem at least as controllable and robust than hugeltbfs even without
> > complex reservation at mmap() time.
> 
> Still sucks huge!
 
Any #PF can get -ENOMEM for whatever reason. Sucks huge indeed.

> > > > > So unless I am really misreading the code
> > > > > Nacked-by: Michal Hocko <mhocko@suse.com>
> > > > > 
> > > > > That doesn't mean I reject the whole idea. There are some details to
> > > > > sort out as mentioned elsewhere but you cannot really depend on
> > > > > pre-allocated pool which can fail at a fault time like that.
> > > > 
> > > > So, to do it similar to hugetlbfs (e.g., with CMA), there would have to be a
> > > > mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> > > > point in time the pages would get moved to the secretmem pool, and a
> > > > mechanism for mmap() etc. to "reserve" from these secretmem pool, such that
> > > > there are guarantees at fault time?
> > > 
> > > yes, reserve at mmap time and use during the fault. But this all sounds
> > > like a self inflicted problem to me. Sure you can have a pre-allocated
> > > or more dynamic pool to reduce the direct mapping fragmentation but you
> > > can always fall back to regular allocatios. In other ways have the pool
> > > as an optimization rather than a hard requirement. With a careful access
> > > control this sounds like a manageable solution to me.
> > 
> > I'd really wish we had this discussion for earlier spins of this series,
> > but since this didn't happen let's refresh the history a bit.
> 
> I am sorry but I am really fighting to find time to watch for all the
> moving targets...
> 
> > One of the major pushbacks on the first RFC [1] of the concept was about
> > the direct map fragmentation. I tried really hard to find data that shows
> > what is the performance difference with different page sizes in the direct
> > map and I didn't find anything.
> > 
> > So presuming that large pages do provide advantage the first implementation
> > of secretmem used PMD_ORDER allocations to amortise the effect of the
> > direct map fragmentation and then handed out 4k pages at each fault. In
> > addition there was an option to reserve a finite pool at boot time and
> > limit secretmem allocations only to that pool.
> > 
> > At some point David suggested to use CMA to improve overall flexibility
> > [3], so I switched secretmem to use CMA.
> > 
> > Now, with the data we have at hand (my benchmarks and Intel's report David
> > mentioned) I'm even not sure this whole pooling even required.
> 
> I would still like to understand whether that data is actually
> representative. With some underlying reasoning rather than I have run
> these XYZ benchmarks and numbers do not look terrible.

I would also very much like to see, for example, reasoning to enabling 1GB
pages in the direct map beyond "because we can" (commits 00d1c5e05736
("x86: add gbpages switches") and ef9257668e31 ("x86: do kernel direct
mapping at boot using GB pages")).

The original Kconfig text for CONFIG_DIRECT_GBPAGES said

          Enable gigabyte pages support (if the CPU supports it). This can
          improve the kernel's performance a tiny bit by reducing TLB
          pressure.

So it is very interesting how tiny that bit was.
 
> > I like the idea to have a pool as an optimization rather than a hard
> > requirement but I don't see why would it need a careful access control. As
> > the direct map fragmentation is not necessarily degrades the performance
> > (and even sometimes it actually improves it) and even then the degradation
> > is small, trying a PMD_ORDER allocation for a pool and then falling back to
> > 4K page may be just fine.
> 
> Well, as soon as this is a scarce resource then an access control seems
> like a first thing to think of. Maybe it is not really necessary but
> then this should be really justified.

And what being a scarce resource here? If we consider lack of the direct
map fragmentation as this resource, there enough measures secretmem
implements to limit user ability to fragment the direct map, as was already
discussed several times. Global limit, memcg and rlimit provide enough
access control already.
Michal Hocko Jan. 29, 2021, 8:23 a.m. UTC | #14
On Thu 28-01-21 13:05:02, James Bottomley wrote:
> Obviously the API choice could be revisited
> but do you have anything to add over the previous discussion, or is
> this just to get your access control?

Well, access control is certainly one thing which I still believe is
missing. But if there is a general agreement that the direct map
manipulation is not that critical then this will become much less of a
problem of course.

It all boils down whether secret memory is a scarce resource. With the
existing implementation it really is. It is effectivelly repeating
same design errors as hugetlb did. And look now, we have a subtle and
convoluted reservation code to track mmap requests and we have a cgroup
controller to, guess what, have at least some control over distribution
if the preallocated pool. See where am I coming from?

If the secret memory is more in line with mlock without any imposed
limit (other than available memory) in the end then, sure, using the same
access control as mlock sounds reasonable. Btw. if this is really
just a more restrictive mlock then is there any reason to not hook this
into the existing mlock infrastructure (e.g. MCL_EXCLUSIVE)?
Implications would be that direct map would be handled on instantiation/tear
down paths, migration would deal with the same (if possible). Other than
that it would be mlock like.
Michal Hocko Jan. 29, 2021, 8:51 a.m. UTC | #15
On Fri 29-01-21 09:21:28, Mike Rapoport wrote:
> On Thu, Jan 28, 2021 at 02:01:06PM +0100, Michal Hocko wrote:
> > On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> > 
> > > And hugetlb pools may be also depleted by anybody by calling
> > > mmap(MAP_HUGETLB) and there is no any limiting knob for this, while
> > > secretmem has RLIMIT_MEMLOCK.
> > 
> > Yes it can fail. But it would fail at the mmap time when the reservation
> > fails. Not during the #PF time which can be at any time.
> 
> It may fail at $PF time as well:
> 
> hugetlb_fault()
>         hugeltb_no_page()
>                 ...
>                 alloc_huge_page()
>                         alloc_gigantic_page()
>                                 cma_alloc()
>                                         -ENOMEM; 

I would have to double check. From what I remember cma allocator is an
optimization to increase chances to allocate hugetlb pages when
overcommiting because pages should be normally pre-allocated in the pool
and reserved during mmap time. But even if a hugetlb page is not pre
allocated then this will get propagated as SIGBUS unless that has
changed.
  
> > > That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
> > > secretmem at least as controllable and robust than hugeltbfs even without
> > > complex reservation at mmap() time.
> > 
> > Still sucks huge!
>  
> Any #PF can get -ENOMEM for whatever reason. Sucks huge indeed.

I certainly can. But it doesn't in practice because most allocations
will simply not fail and rather invoke OOM killer directly. Maybe there
are cases which still might fail (higher order, weaker reclaim
capabilities etc) but that would result in a bug in the end because the
#PF handler would trigger the oom killer.

[...]
> > I would still like to understand whether that data is actually
> > representative. With some underlying reasoning rather than I have run
> > these XYZ benchmarks and numbers do not look terrible.
> 
> I would also very much like to see, for example, reasoning to enabling 1GB
> pages in the direct map beyond "because we can" (commits 00d1c5e05736
> ("x86: add gbpages switches") and ef9257668e31 ("x86: do kernel direct
> mapping at boot using GB pages")).
> 
> The original Kconfig text for CONFIG_DIRECT_GBPAGES said
> 
>           Enable gigabyte pages support (if the CPU supports it). This can
>           improve the kernel's performance a tiny bit by reducing TLB
>           pressure.
> 
> So it is very interesting how tiny that bit was.

Yeah and that sucks! Because it is leaving us with speculations now. I
hope you do not want to repeat the same mistake now and leave somebody
in the future in the same situation.

> > > I like the idea to have a pool as an optimization rather than a hard
> > > requirement but I don't see why would it need a careful access control. As
> > > the direct map fragmentation is not necessarily degrades the performance
> > > (and even sometimes it actually improves it) and even then the degradation
> > > is small, trying a PMD_ORDER allocation for a pool and then falling back to
> > > 4K page may be just fine.
> > 
> > Well, as soon as this is a scarce resource then an access control seems
> > like a first thing to think of. Maybe it is not really necessary but
> > then this should be really justified.
> 
> And what being a scarce resource here?

A fixed size pool shared by all users of this feature.

> If we consider lack of the direct
> map fragmentation as this resource, there enough measures secretmem
> implements to limit user ability to fragment the direct map, as was already
> discussed several times. Global limit, memcg and rlimit provide enough
> access control already.

Try to do a simple excercise. You have X amout of secret memory. How do
you distribute that to all interested users (some of them adversaries)
based on the above. Global limit is a DoS vector potentially, memcg is a
mixed bag of all other memory and it would become really tricky to
enforece proportion of the X while having other memory consumed and
rlimit is per process rather than per user.

Look at how hugetlb had to develop its cgroup controler to distribute
the pool among workloads. Then it has turned out that even reservations
have to be per workload. Quite a convoluted stuff evolved around that
feature because it turned out that the initial assumption that only few
users would be using the pool simply didn't pass the reality check.

As I've mentioned in other response to James. If the direct map
manipulation is not as big of a problem as most of us dogmatically
believed then things become much simpler. There is no need for global
pool and you are back to mlock kinda model.
James Bottomley Feb. 1, 2021, 4:56 p.m. UTC | #16
On Fri, 2021-01-29 at 09:23 +0100, Michal Hocko wrote:
> On Thu 28-01-21 13:05:02, James Bottomley wrote:
> > Obviously the API choice could be revisited
> > but do you have anything to add over the previous discussion, or is
> > this just to get your access control?
> 
> Well, access control is certainly one thing which I still believe is
> missing. But if there is a general agreement that the direct map
> manipulation is not that critical then this will become much less of
> a problem of course.

The secret memory is a scarce resource but it's not a facility that
should only be available to some users.

> It all boils down whether secret memory is a scarce resource. With
> the existing implementation it really is. It is effectivelly
> repeating same design errors as hugetlb did. And look now, we have a
> subtle and convoluted reservation code to track mmap requests and we
> have a cgroup controller to, guess what, have at least some control
> over distribution if the preallocated pool. See where am I coming
> from?

I'm fairly sure rlimit is the correct way to control this.  The
subtlety in both rlimit and memcg tracking comes from deciding to
account under an existing category rather than having our own new one. 
People don't like new stuff in accounting because it requires
modifications to everything in userspace.  Accounting under and
existing limit keeps userspace the same but leads to endless arguments
about which limit it should be under.  It took us several patch set
iterations to get to a fragile consensus on this which you're now
disrupting for reasons you're not making clear.

> If the secret memory is more in line with mlock without any imposed
> limit (other than available memory) in the end then, sure, using the
> same access control as mlock sounds reasonable. Btw. if this is
> really just a more restrictive mlock then is there any reason to not
> hook this into the existing mlock infrastructure (e.g.
> MCL_EXCLUSIVE)? Implications would be that direct map would be
> handled on instantiation/tear down paths, migration would deal with
> the same (if possible). Other than that it would be mlock like.

In the very first patch set we proposed a mmap flag to do this.  Under
detailed probing it emerged that this suffers from several design
problems: the KVM people want VMM to be able to remove the secret
memory range from the process; there may be situations where sharing is
useful and some people want to be able to seal the operations.  All of
this ended up convincing everyone that a file descriptor based approach
was better than a mmap one.

James
Michal Hocko Feb. 2, 2021, 9:35 a.m. UTC | #17
On Mon 01-02-21 08:56:19, James Bottomley wrote:
> On Fri, 2021-01-29 at 09:23 +0100, Michal Hocko wrote:
> > On Thu 28-01-21 13:05:02, James Bottomley wrote:
> > > Obviously the API choice could be revisited
> > > but do you have anything to add over the previous discussion, or is
> > > this just to get your access control?
> > 
> > Well, access control is certainly one thing which I still believe is
> > missing. But if there is a general agreement that the direct map
> > manipulation is not that critical then this will become much less of
> > a problem of course.
> 
> The secret memory is a scarce resource but it's not a facility that
> should only be available to some users.

How those two objectives go along? Or maybe our understanding of what
scrace really means here. If the pool of the secret memory is very limited
then you really need a way to stop one party from depriving others. More
on that below.

> > It all boils down whether secret memory is a scarce resource. With
> > the existing implementation it really is. It is effectivelly
> > repeating same design errors as hugetlb did. And look now, we have a
> > subtle and convoluted reservation code to track mmap requests and we
> > have a cgroup controller to, guess what, have at least some control
> > over distribution if the preallocated pool. See where am I coming
> > from?
> 
> I'm fairly sure rlimit is the correct way to control this.  The
> subtlety in both rlimit and memcg tracking comes from deciding to
> account under an existing category rather than having our own new one. 
> People don't like new stuff in accounting because it requires
> modifications to everything in userspace.  Accounting under and
> existing limit keeps userspace the same but leads to endless arguments
> about which limit it should be under.  It took us several patch set
> iterations to get to a fragile consensus on this which you're now
> disrupting for reasons you're not making clear.

I hoped I had made my points really clear. The existing scheme allows
one users (potentially adversary) to deplete the preallocated pool
and cause a shitstorm of OOM killer because there is no real way to
replenish the pool from the oom killer other than randomly keep killing
tasks until one happens to release its secret memory back to the
pool. Is that more clear now?

And no, rlimit and memcg limit will not save you from that because the
former is per process and later is hard to manage under a single limit
which might be order of magnitude larger than the secret memory pool
size. See the point?

I have also proposed potential ways out of this. Either the pool is not
fixed sized and you make it a regular unevictable memory (if direct map
fragmentation is not considered a major problem) or you need a careful
access control or you need SIGBUS on the mmap failure (to allow at least
some fallback mode to caller).

I do not see any other way around it. I might be missing some other
ways but so far I keep hearing that the existing scheme is just fine
because this has been discussed in the past and you have agreed it is
ok. Without any specifics...

Please keep in mind this is a user interface and it is due to careful
scrutiny. So rather than pushing back with "you are disrupting a
consensus" kinda feedback, please try to stay technical.

> > If the secret memory is more in line with mlock without any imposed
> > limit (other than available memory) in the end then, sure, using the
> > same access control as mlock sounds reasonable. Btw. if this is
> > really just a more restrictive mlock then is there any reason to not
> > hook this into the existing mlock infrastructure (e.g.
> > MCL_EXCLUSIVE)? Implications would be that direct map would be
> > handled on instantiation/tear down paths, migration would deal with
> > the same (if possible). Other than that it would be mlock like.
> 
> In the very first patch set we proposed a mmap flag to do this.  Under
> detailed probing it emerged that this suffers from several design
> problems: the KVM people want VMM to be able to remove the secret
> memory range from the process; there may be situations where sharing is
> useful and some people want to be able to seal the operations.  All of
> this ended up convincing everyone that a file descriptor based approach
> was better than a mmap one.

OK, fair enough. This belongs to the changelog IMHO. It is good to know
why existing interfaces do not match the need.
Mike Rapoport Feb. 2, 2021, 12:48 p.m. UTC | #18
On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
> On Mon 01-02-21 08:56:19, James Bottomley wrote:
> 
> I have also proposed potential ways out of this. Either the pool is not
> fixed sized and you make it a regular unevictable memory (if direct map
> fragmentation is not considered a major problem)

I think that the direct map fragmentation is not a major problem, and the
data we have confirms it, so I'd be more than happy to entirely drop the
pool, allocate memory page by page and remove each page from the direct
map. 

Still, we cannot prove negative and it could happen that there is a
workload that would suffer a lot from the direct map fragmentation, so
having a pool of large pages upfront is better than trying to fix it
afterwards. As we get more confidence that the direct map fragmentation is
not an issue as it is common to believe we may remove the pool altogether.

I think that using PMD_ORDER allocations for the pool with a fallback to
order 0 will do the job, but unfortunately I doubt we'll reach a consensus
about this because dogmatic beliefs are hard to shake...

A more restrictive possibility is to still use plain PMD_ORDER allocations
to fill the pool, without relying on CMA. In this case there will be no
global secretmem specific pool to exhaust, but then it's possible to drain
high order free blocks in a system, so CMA has an advantage of limiting
secretmem pools to certain amount of memory with somewhat higher
probability for high order allocation to succeed. 

> or you need a careful access control 

Do you mind elaborating what do you mean by "careful access control"?

> or you need SIGBUS on the mmap failure (to allow at least some fallback
> mode to caller).

As I've already said, I agree that SIGBUS is way better than OOM at #PF
time.
And we can add some means to fail at mmap() time if the pools are running
low.
David Hildenbrand Feb. 2, 2021, 1:14 p.m. UTC | #19
On 02.02.21 13:48, Mike Rapoport wrote:
> On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
>> On Mon 01-02-21 08:56:19, James Bottomley wrote:
>>
>> I have also proposed potential ways out of this. Either the pool is not
>> fixed sized and you make it a regular unevictable memory (if direct map
>> fragmentation is not considered a major problem)
> 
> I think that the direct map fragmentation is not a major problem, and the
> data we have confirms it, so I'd be more than happy to entirely drop the
> pool, allocate memory page by page and remove each page from the direct
> map.
> 
> Still, we cannot prove negative and it could happen that there is a
> workload that would suffer a lot from the direct map fragmentation, so
> having a pool of large pages upfront is better than trying to fix it
> afterwards. As we get more confidence that the direct map fragmentation is
> not an issue as it is common to believe we may remove the pool altogether.
> 
> I think that using PMD_ORDER allocations for the pool with a fallback to
> order 0 will do the job, but unfortunately I doubt we'll reach a consensus
> about this because dogmatic beliefs are hard to shake...
> 
> A more restrictive possibility is to still use plain PMD_ORDER allocations
> to fill the pool, without relying on CMA. In this case there will be no
> global secretmem specific pool to exhaust, but then it's possible to drain
> high order free blocks in a system, so CMA has an advantage of limiting
> secretmem pools to certain amount of memory with somewhat higher
> probability for high order allocation to succeed.

I am not really concerned about fragmenting/breaking up the direct map 
as long as the feature has to be explicitly enabled (similar to 
fragmenting the vmemmap).

As already expressed, I dislike allowing user space to consume an 
unlimited number unmovable/unmigratable allocations. We already have 
that in some cases with huge pages (when the arch does not support 
migration) - but there we can at least manage the consumption using the 
whole max/reserved/free/... infrastructure. In addition, adding arch 
support for migration shouldn't be too complicated.

The idea of using CMA is quite good IMHO, because there we can locally 
limit the direct map fragmentation and don't have to bother about 
migration at all. We own the area, so we can place as many unmovable 
allocations on it as we can fit.

But it sounds like, we would also need some kind of reservation 
mechanism in either scenario (CMA vs. no CMA).

If we don't want to go full-circle on max/reserved/free/..., allowing 
for migration of secretmem pages would make sense. Then, these pages 
become "less special". Map source, copy, unmap destination. The security 
implementations are the ugly part. I wonder if we could temporarily map 
somewhere else, so avoiding to touch the direct map during migration.
Michal Hocko Feb. 2, 2021, 1:27 p.m. UTC | #20
On Tue 02-02-21 14:48:57, Mike Rapoport wrote:
> On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
> > On Mon 01-02-21 08:56:19, James Bottomley wrote:
> > 
> > I have also proposed potential ways out of this. Either the pool is not
> > fixed sized and you make it a regular unevictable memory (if direct map
> > fragmentation is not considered a major problem)
> 
> I think that the direct map fragmentation is not a major problem, and the
> data we have confirms it, so I'd be more than happy to entirely drop the
> pool, allocate memory page by page and remove each page from the direct
> map. 
> 
> Still, we cannot prove negative and it could happen that there is a
> workload that would suffer a lot from the direct map fragmentation, so
> having a pool of large pages upfront is better than trying to fix it
> afterwards. As we get more confidence that the direct map fragmentation is
> not an issue as it is common to believe we may remove the pool altogether.

I would drop the pool altogether and instantiate pages to the
unevictable LRU list and internally treat it as ramdisk/mlock so you
will get an accounting correctly. The feature should be still opt-in
(e.g. a kernel command line parameter) for now. The recent report by
Intel (http://lkml.kernel.org/r/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com)
there is no clear win to have huge mappings in _general_ but there are
still workloads which benefit. 
 
> I think that using PMD_ORDER allocations for the pool with a fallback to
> order 0 will do the job, but unfortunately I doubt we'll reach a consensus
> about this because dogmatic beliefs are hard to shake...

If this is opt-in then those beliefs can be relaxed somehow. Long term
it makes a lot of sense to optimize for a better direct map management
but I do not think this is a hard requirement for an initial
implementation if it is not imposed to everybody by default.

> A more restrictive possibility is to still use plain PMD_ORDER allocations
> to fill the pool, without relying on CMA. In this case there will be no
> global secretmem specific pool to exhaust, but then it's possible to drain
> high order free blocks in a system, so CMA has an advantage of limiting
> secretmem pools to certain amount of memory with somewhat higher
> probability for high order allocation to succeed. 
> 
> > or you need a careful access control 
> 
> Do you mind elaborating what do you mean by "careful access control"?

As already mentioned, a mechanism to control who can use this feature -
e.g. make it a special device which you can access control by
permissions or higher level security policies. But that is really needed
only if the pool is fixed sized.
 
> > or you need SIGBUS on the mmap failure (to allow at least some fallback
> > mode to caller).
> 
> As I've already said, I agree that SIGBUS is way better than OOM at #PF
> time.

It would be better than OOM but it would still be a terrible interface.
So I would go that path only as a last resort. I do not even want to
think what kind of security consequences that would have. E.g. think of
somebody depleting the pool and pushing security sensitive workload into
fallback which is not backed by security memory.

> And we can add some means to fail at mmap() time if the pools are running
> low.

Welcome to hugetlb reservation world...
Michal Hocko Feb. 2, 2021, 1:32 p.m. UTC | #21
On Tue 02-02-21 14:14:09, David Hildenbrand wrote:
[...]
> As already expressed, I dislike allowing user space to consume an unlimited
> number unmovable/unmigratable allocations. We already have that in some
> cases with huge pages (when the arch does not support migration) - but there
> we can at least manage the consumption using the whole max/reserved/free/...
> infrastructure. In addition, adding arch support for migration shouldn't be
> too complicated.

Well, mlock is not too different here as well. Hugepages are arguably an
easier model because it requires an explicit pre-configuration by an
admin. Mlock doesn't have anything like that. Please also note that
while mlock pages are migrateable by default, this is not the case in
general because they can be configured to disalow migration to prevent
from minor page faults as some workloads require that (e.g. RT).
Another example is ramdisk or even tmpfs (with swap storage depleted or
not configured). Both are PITA from the OOM POV but they are manageable
if people are careful. If secretmem behaves along those existing models
then we know what to expect at least.
David Hildenbrand Feb. 2, 2021, 2:12 p.m. UTC | #22
On 02.02.21 14:32, Michal Hocko wrote:
> On Tue 02-02-21 14:14:09, David Hildenbrand wrote:
> [...]
>> As already expressed, I dislike allowing user space to consume an unlimited
>> number unmovable/unmigratable allocations. We already have that in some
>> cases with huge pages (when the arch does not support migration) - but there
>> we can at least manage the consumption using the whole max/reserved/free/...
>> infrastructure. In addition, adding arch support for migration shouldn't be
>> too complicated.
> 
> Well, mlock is not too different here as well. Hugepages are arguably an
> easier model because it requires an explicit pre-configuration by an
> admin. Mlock doesn't have anything like that. Please also note that
> while mlock pages are migrateable by default, this is not the case in
> general because they can be configured to disalow migration to prevent
> from minor page faults as some workloads require that (e.g. RT).

Yeah, however that is a very special case. In most cases mlock() simply 
prevents swapping, you still have movable pages you can place anywhere 
you like (including on ZONE_MOVABLE).

> Another example is ramdisk or even tmpfs (with swap storage depleted or
> not configured). Both are PITA from the OOM POV but they are manageable
> if people are careful.

Right, but again, special cases - e.g., tmpfs explicitly has to be resized.

> If secretmem behaves along those existing models
> then we know what to expect at least.

I think secretmem behaves much more like longterm GUP right now 
("unmigratable", "lifetime controlled by user space", "cannot go on 
CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it 
or make it behave more like mlocked pages.
Michal Hocko Feb. 2, 2021, 2:22 p.m. UTC | #23
On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
[...]
> I think secretmem behaves much more like longterm GUP right now
> ("unmigratable", "lifetime controlled by user space", "cannot go on
> CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it or
> make it behave more like mlocked pages.

I thought I have already asked but I must have forgotten. Is there any
actual reason why the memory is not movable? Timing attacks?
David Hildenbrand Feb. 2, 2021, 2:26 p.m. UTC | #24
On 02.02.21 15:22, Michal Hocko wrote:
> On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
> [...]
>> I think secretmem behaves much more like longterm GUP right now
>> ("unmigratable", "lifetime controlled by user space", "cannot go on
>> CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it or
>> make it behave more like mlocked pages.
> 
> I thought I have already asked but I must have forgotten. Is there any
> actual reason why the memory is not movable? Timing attacks?

I think the reason is simple: no direct map, no copying of memory.

As I mentioned, we would have to temporarily map in order to copy. 
Mapping it somewhere else (like kmap), outside of the direct map might 
reduce possible attacks.
Michal Hocko Feb. 2, 2021, 2:32 p.m. UTC | #25
On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
> On 02.02.21 15:22, Michal Hocko wrote:
> > On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
> > [...]
> > > I think secretmem behaves much more like longterm GUP right now
> > > ("unmigratable", "lifetime controlled by user space", "cannot go on
> > > CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it or
> > > make it behave more like mlocked pages.
> > 
> > I thought I have already asked but I must have forgotten. Is there any
> > actual reason why the memory is not movable? Timing attacks?
> 
> I think the reason is simple: no direct map, no copying of memory.

This is an implementation detail though and not something terribly hard
to add on top later on. I was more worried there would be really
fundamental reason why this is not possible. E.g. security implications.
David Hildenbrand Feb. 2, 2021, 2:34 p.m. UTC | #26
On 02.02.21 15:32, Michal Hocko wrote:
> On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
>> On 02.02.21 15:22, Michal Hocko wrote:
>>> On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
>>> [...]
>>>> I think secretmem behaves much more like longterm GUP right now
>>>> ("unmigratable", "lifetime controlled by user space", "cannot go on
>>>> CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it or
>>>> make it behave more like mlocked pages.
>>>
>>> I thought I have already asked but I must have forgotten. Is there any
>>> actual reason why the memory is not movable? Timing attacks?
>>
>> I think the reason is simple: no direct map, no copying of memory.
> 
> This is an implementation detail though and not something terribly hard
> to add on top later on. I was more worried there would be really
> fundamental reason why this is not possible. E.g. security implications.

I don't remember all the details. Let's see what Mike thinks regarding 
migration (e.g., security concerns).
David Hildenbrand Feb. 2, 2021, 2:42 p.m. UTC | #27
On 29.01.21 09:51, Michal Hocko wrote:
> On Fri 29-01-21 09:21:28, Mike Rapoport wrote:
>> On Thu, Jan 28, 2021 at 02:01:06PM +0100, Michal Hocko wrote:
>>> On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
>>>
>>>> And hugetlb pools may be also depleted by anybody by calling
>>>> mmap(MAP_HUGETLB) and there is no any limiting knob for this, while
>>>> secretmem has RLIMIT_MEMLOCK.
>>>
>>> Yes it can fail. But it would fail at the mmap time when the reservation
>>> fails. Not during the #PF time which can be at any time.
>>
>> It may fail at $PF time as well:
>>
>> hugetlb_fault()
>>          hugeltb_no_page()
>>                  ...
>>                  alloc_huge_page()
>>                          alloc_gigantic_page()
>>                                  cma_alloc()
>>                                          -ENOMEM;
> 
> I would have to double check. From what I remember cma allocator is an
> optimization to increase chances to allocate hugetlb pages when
> overcommiting because pages should be normally pre-allocated in the pool
> and reserved during mmap time. But even if a hugetlb page is not pre
> allocated then this will get propagated as SIGBUS unless that has
> changed.

It's an optimization to allocate gigantic pages dynamically later (so 
not using memblock during boot). Not just for overcommit, but for any 
kind of allocation.

The actual allocation from cma should happen when setting nr_pages:

nr_hugepages_store_common()->set_max_huge_pages()->alloc_pool_huge_page()...->alloc_gigantic_page()

The path described above seems to be trying to overcommit gigantic 
pages, something that can be expected to SIGBUS. Reservations are 
handled via the pre-allocated pool.
Mike Rapoport Feb. 2, 2021, 6:15 p.m. UTC | #28
On Tue, Feb 02, 2021 at 03:34:29PM +0100, David Hildenbrand wrote:
> On 02.02.21 15:32, Michal Hocko wrote:
> > On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
> > > On 02.02.21 15:22, Michal Hocko wrote:
> > > > On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
> > > > [...]
> > > > > I think secretmem behaves much more like longterm GUP right now
> > > > > ("unmigratable", "lifetime controlled by user space", "cannot go on
> > > > > CMA/ZONE_MOVABLE"). I'd either want to reasonably well control/limit it or
> > > > > make it behave more like mlocked pages.
> > > > 
> > > > I thought I have already asked but I must have forgotten. Is there any
> > > > actual reason why the memory is not movable? Timing attacks?
> > > 
> > > I think the reason is simple: no direct map, no copying of memory.
> > 
> > This is an implementation detail though and not something terribly hard
> > to add on top later on. I was more worried there would be really
> > fundamental reason why this is not possible. E.g. security implications.
> 
> I don't remember all the details. Let's see what Mike thinks regarding
> migration (e.g., security concerns).

Thanks for considering me a security expert :-)

Yet, I cannot estimate how dangerous is the temporal exposure of
this data to the kernel via the direct map in the simple map/copy/unmap
sequence.

More secure way would be to map source and destination in a different page table
rather than in the direct map, similarly to the way text_poke() on x86
does.

I've left the migration callback empty for now because it can be added on
top and its implementation would depend on the way we do (or do not do)
pooling.
James Bottomley Feb. 2, 2021, 6:55 p.m. UTC | #29
On Tue, 2021-02-02 at 20:15 +0200, Mike Rapoport wrote:
> On Tue, Feb 02, 2021 at 03:34:29PM +0100, David Hildenbrand wrote:
> > On 02.02.21 15:32, Michal Hocko wrote:
> > > On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
> > > > On 02.02.21 15:22, Michal Hocko wrote:
> > > > > On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
> > > > > [...]
> > > > > > I think secretmem behaves much more like longterm GUP right
> > > > > > now
> > > > > > ("unmigratable", "lifetime controlled by user space",
> > > > > > "cannot go on
> > > > > > CMA/ZONE_MOVABLE"). I'd either want to reasonably well
> > > > > > control/limit it or
> > > > > > make it behave more like mlocked pages.
> > > > > 
> > > > > I thought I have already asked but I must have forgotten. Is
> > > > > there any
> > > > > actual reason why the memory is not movable? Timing attacks?
> > > > 
> > > > I think the reason is simple: no direct map, no copying of
> > > > memory.
> > > 
> > > This is an implementation detail though and not something
> > > terribly hard
> > > to add on top later on. I was more worried there would be really
> > > fundamental reason why this is not possible. E.g. security
> > > implications.
> > 
> > I don't remember all the details. Let's see what Mike thinks
> > regarding
> > migration (e.g., security concerns).
> 
> Thanks for considering me a security expert :-)
> 
> Yet, I cannot estimate how dangerous is the temporal exposure of
> this data to the kernel via the direct map in the simple
> map/copy/unmap
> sequence.

Well the safest security statement is that we never expose the data to
the kernel because it's a very clean security statement and easy to
enforce.  It's also the easiest threat model to analyse.   Once we do
start exposing the secret to the kernel it alters the threat profile
and the analysis and obviously potentially provides the ROP gadget to
an attacker to do the same.  Instinct tells me that the loss of
security doesn't really make up for the ability to swap or migrate but
if there were a case for doing the latter, it would have to be a
security policy of the user (i.e. a user should be able to decide their
data is too sensitive to expose to the kernel).

> More secure way would be to map source and destination in a different
> page table rather than in the direct map, similarly to the way
> text_poke() on x86 does.

I think doing this would have much less of an impact on the security
posture because it's already theoretically possible to have kmap
restore access to the kernel.

James


> I've left the migration callback empty for now because it can be
> added on top and its implementation would depend on the way we do (or
> do not do) pooling.
>
Mike Rapoport Feb. 2, 2021, 7:10 p.m. UTC | #30
On Tue, Feb 02, 2021 at 02:27:14PM +0100, Michal Hocko wrote:
> On Tue 02-02-21 14:48:57, Mike Rapoport wrote:
> > On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
> > > On Mon 01-02-21 08:56:19, James Bottomley wrote:
> > > 
> > > I have also proposed potential ways out of this. Either the pool is not
> > > fixed sized and you make it a regular unevictable memory (if direct map
> > > fragmentation is not considered a major problem)
> > 
> > I think that the direct map fragmentation is not a major problem, and the
> > data we have confirms it, so I'd be more than happy to entirely drop the
> > pool, allocate memory page by page and remove each page from the direct
> > map. 
> > 
> > Still, we cannot prove negative and it could happen that there is a
> > workload that would suffer a lot from the direct map fragmentation, so
> > having a pool of large pages upfront is better than trying to fix it
> > afterwards. As we get more confidence that the direct map fragmentation is
> > not an issue as it is common to believe we may remove the pool altogether.
> 
> I would drop the pool altogether and instantiate pages to the
> unevictable LRU list and internally treat it as ramdisk/mlock so you
> will get an accounting correctly. The feature should be still opt-in
> (e.g. a kernel command line parameter) for now. The recent report by
> Intel (http://lkml.kernel.org/r/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com)
> there is no clear win to have huge mappings in _general_ but there are
> still workloads which benefit. 
>  
> > I think that using PMD_ORDER allocations for the pool with a fallback to
> > order 0 will do the job, but unfortunately I doubt we'll reach a consensus
> > about this because dogmatic beliefs are hard to shake...
> 
> If this is opt-in then those beliefs can be relaxed somehow. Long term
> it makes a lot of sense to optimize for a better direct map management
> but I do not think this is a hard requirement for an initial
> implementation if it is not imposed to everybody by default.
>
> > A more restrictive possibility is to still use plain PMD_ORDER allocations
> > to fill the pool, without relying on CMA. In this case there will be no
> > global secretmem specific pool to exhaust, but then it's possible to drain
> > high order free blocks in a system, so CMA has an advantage of limiting
> > secretmem pools to certain amount of memory with somewhat higher
> > probability for high order allocation to succeed. 
> > 
> > > or you need a careful access control 
> > 
> > Do you mind elaborating what do you mean by "careful access control"?
> 
> As already mentioned, a mechanism to control who can use this feature -
> e.g. make it a special device which you can access control by
> permissions or higher level security policies. But that is really needed
> only if the pool is fixed sized.
  
Let me reiterate to make sure I don't misread your suggestion.

If we make secretmem an opt-in feature with, e.g. kernel parameter, the
pooling of large pages is unnecessary. In this case there is no limited
resource we need to protect because secretmem will allocate page by page.

Since there is no limited resource, we don't need special permissions
to access secretmem so we can move forward with a system call that creates
a mmapable file descriptor and save the hassle of a chardev.

I cannot say I don't like this as it cuts roughly half of mm/secretmem.c :)

But I must say I am still a bit concerned about that we have no provisions
here for dealing with the direct map fragmentation even with the set goal
to improve the direct map management in the long run...
Michal Hocko Feb. 3, 2021, 9:12 a.m. UTC | #31
On Tue 02-02-21 21:10:40, Mike Rapoport wrote:
> On Tue, Feb 02, 2021 at 02:27:14PM +0100, Michal Hocko wrote:
> > On Tue 02-02-21 14:48:57, Mike Rapoport wrote:
> > > On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
> > > > On Mon 01-02-21 08:56:19, James Bottomley wrote:
> > > > 
> > > > I have also proposed potential ways out of this. Either the pool is not
> > > > fixed sized and you make it a regular unevictable memory (if direct map
> > > > fragmentation is not considered a major problem)
> > > 
> > > I think that the direct map fragmentation is not a major problem, and the
> > > data we have confirms it, so I'd be more than happy to entirely drop the
> > > pool, allocate memory page by page and remove each page from the direct
> > > map. 
> > > 
> > > Still, we cannot prove negative and it could happen that there is a
> > > workload that would suffer a lot from the direct map fragmentation, so
> > > having a pool of large pages upfront is better than trying to fix it
> > > afterwards. As we get more confidence that the direct map fragmentation is
> > > not an issue as it is common to believe we may remove the pool altogether.
> > 
> > I would drop the pool altogether and instantiate pages to the
> > unevictable LRU list and internally treat it as ramdisk/mlock so you
> > will get an accounting correctly. The feature should be still opt-in
> > (e.g. a kernel command line parameter) for now. The recent report by
> > Intel (http://lkml.kernel.org/r/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com)
> > there is no clear win to have huge mappings in _general_ but there are
> > still workloads which benefit. 
> >  
> > > I think that using PMD_ORDER allocations for the pool with a fallback to
> > > order 0 will do the job, but unfortunately I doubt we'll reach a consensus
> > > about this because dogmatic beliefs are hard to shake...
> > 
> > If this is opt-in then those beliefs can be relaxed somehow. Long term
> > it makes a lot of sense to optimize for a better direct map management
> > but I do not think this is a hard requirement for an initial
> > implementation if it is not imposed to everybody by default.
> >
> > > A more restrictive possibility is to still use plain PMD_ORDER allocations
> > > to fill the pool, without relying on CMA. In this case there will be no
> > > global secretmem specific pool to exhaust, but then it's possible to drain
> > > high order free blocks in a system, so CMA has an advantage of limiting
> > > secretmem pools to certain amount of memory with somewhat higher
> > > probability for high order allocation to succeed. 
> > > 
> > > > or you need a careful access control 
> > > 
> > > Do you mind elaborating what do you mean by "careful access control"?
> > 
> > As already mentioned, a mechanism to control who can use this feature -
> > e.g. make it a special device which you can access control by
> > permissions or higher level security policies. But that is really needed
> > only if the pool is fixed sized.
>   
> Let me reiterate to make sure I don't misread your suggestion.
> 
> If we make secretmem an opt-in feature with, e.g. kernel parameter, the
> pooling of large pages is unnecessary. In this case there is no limited
> resource we need to protect because secretmem will allocate page by page.

Yes.

> Since there is no limited resource, we don't need special permissions
> to access secretmem so we can move forward with a system call that creates
> a mmapable file descriptor and save the hassle of a chardev.

Yes, I assume you implicitly assume mlock rlimit here. Also memcg
accounting should be in place. Wrt to the specific syscall, please
document why existing interfaces are not a good fit as well. It would be
also great to describe interaction with mlock itself (I assume the two
to be incompatible - mlock will fail on and mlockall will ignore it).

> I cannot say I don't like this as it cuts roughly half of mm/secretmem.c :)
> 
> But I must say I am still a bit concerned about that we have no provisions
> here for dealing with the direct map fragmentation even with the set goal
> to improve the direct map management in the long run...

Yes that is something that will be needed long term. I do not think this
is strictly necessary for the initial submission, though. The
implementation should be as simple as possible now and complexity added
on top.
Michal Hocko Feb. 3, 2021, 12:09 p.m. UTC | #32
On Tue 02-02-21 10:55:40, James Bottomley wrote:
> On Tue, 2021-02-02 at 20:15 +0200, Mike Rapoport wrote:
> > On Tue, Feb 02, 2021 at 03:34:29PM +0100, David Hildenbrand wrote:
> > > On 02.02.21 15:32, Michal Hocko wrote:
> > > > On Tue 02-02-21 15:26:20, David Hildenbrand wrote:
> > > > > On 02.02.21 15:22, Michal Hocko wrote:
> > > > > > On Tue 02-02-21 15:12:21, David Hildenbrand wrote:
> > > > > > [...]
> > > > > > > I think secretmem behaves much more like longterm GUP right
> > > > > > > now
> > > > > > > ("unmigratable", "lifetime controlled by user space",
> > > > > > > "cannot go on
> > > > > > > CMA/ZONE_MOVABLE"). I'd either want to reasonably well
> > > > > > > control/limit it or
> > > > > > > make it behave more like mlocked pages.
> > > > > > 
> > > > > > I thought I have already asked but I must have forgotten. Is
> > > > > > there any
> > > > > > actual reason why the memory is not movable? Timing attacks?
> > > > > 
> > > > > I think the reason is simple: no direct map, no copying of
> > > > > memory.
> > > > 
> > > > This is an implementation detail though and not something
> > > > terribly hard
> > > > to add on top later on. I was more worried there would be really
> > > > fundamental reason why this is not possible. E.g. security
> > > > implications.
> > > 
> > > I don't remember all the details. Let's see what Mike thinks
> > > regarding
> > > migration (e.g., security concerns).
> > 
> > Thanks for considering me a security expert :-)
> > 
> > Yet, I cannot estimate how dangerous is the temporal exposure of
> > this data to the kernel via the direct map in the simple
> > map/copy/unmap
> > sequence.
> 
> Well the safest security statement is that we never expose the data to
> the kernel because it's a very clean security statement and easy to
> enforce. It's also the easiest threat model to analyse.   Once we do
> start exposing the secret to the kernel it alters the threat profile
> and the analysis and obviously potentially provides the ROP gadget to
> an attacker to do the same. Instinct tells me that the loss of
> security doesn't really make up for the ability to swap or migrate but
> if there were a case for doing the latter, it would have to be a
> security policy of the user (i.e. a user should be able to decide their
> data is too sensitive to expose to the kernel).

The security/threat model should be documented in the changelog as
well. I am not a security expert but I would tend to agree that not
allowing even temporal mapping for data copying (in the kernel) is the
most robust approach. Whether that is generally necessary for users I do
not know.

From the API POV I think it makes sense to have two
modes. NEVER_MAP_IN_KERNEL which would imply no migrateability, no
copy_{from,to}_user, no gup or any other way for the kernel to access
content of the memory. Maybe even zero the content on the last unmap to
never allow any data leak. ALLOW_TEMPORARY would unmap the page from
the direct mapping but it would still allow temporary mappings for
data copying inside the kernel (thus allow CoW, copy*user, migration).
Which one should be default and which an opt-in I do not know. A less
restrictive mode to be default and the more restrictive an opt-in via
flags makes a lot of sense to me though.
Mike Rapoport Feb. 4, 2021, 9:58 a.m. UTC | #33
On Wed, Feb 03, 2021 at 10:12:22AM +0100, Michal Hocko wrote:
> On Tue 02-02-21 21:10:40, Mike Rapoport wrote:
> >   
> > Let me reiterate to make sure I don't misread your suggestion.
> > 
> > If we make secretmem an opt-in feature with, e.g. kernel parameter, the
> > pooling of large pages is unnecessary. In this case there is no limited
> > resource we need to protect because secretmem will allocate page by page.
> 
> Yes.
> 
> > Since there is no limited resource, we don't need special permissions
> > to access secretmem so we can move forward with a system call that creates
> > a mmapable file descriptor and save the hassle of a chardev.
> 
> Yes, I assume you implicitly assume mlock rlimit here.

Yes.

> Also memcg accounting should be in place. 

Right, without pools memcg accounting is no different from other
unevictable files.

> Wrt to the specific syscall, please document why existing interfaces are
> not a good fit as well. It would be also great to describe interaction
> with mlock itself (I assume the two to be incompatible - mlock will fail
> on and mlockall will ignore it).

The interaction with mlock() belongs more to the man page, but I don't mind
adding this to changelog as well.
Mike Rapoport Feb. 4, 2021, 11:31 a.m. UTC | #34
On Wed, Feb 03, 2021 at 01:09:30PM +0100, Michal Hocko wrote:
> On Tue 02-02-21 10:55:40, James Bottomley wrote:
> > On Tue, 2021-02-02 at 20:15 +0200, Mike Rapoport wrote:
> > > On Tue, Feb 02, 2021 at 03:34:29PM +0100, David Hildenbrand wrote:
> > > > On 02.02.21 15:32, Michal Hocko wrote:
> > 
> > Well the safest security statement is that we never expose the data to
> > the kernel because it's a very clean security statement and easy to
> > enforce. It's also the easiest threat model to analyse.   Once we do
> > start exposing the secret to the kernel it alters the threat profile
> > and the analysis and obviously potentially provides the ROP gadget to
> > an attacker to do the same. Instinct tells me that the loss of
> > security doesn't really make up for the ability to swap or migrate but
> > if there were a case for doing the latter, it would have to be a
> > security policy of the user (i.e. a user should be able to decide their
> > data is too sensitive to expose to the kernel).
> 
> The security/threat model should be documented in the changelog as
> well. I am not a security expert but I would tend to agree that not
> allowing even temporal mapping for data copying (in the kernel) is the
> most robust approach. Whether that is generally necessary for users I do
> not know.
> 
> From the API POV I think it makes sense to have two
> modes. NEVER_MAP_IN_KERNEL which would imply no migrateability, no
> copy_{from,to}_user, no gup or any other way for the kernel to access
> content of the memory. Maybe even zero the content on the last unmap to
> never allow any data leak. ALLOW_TEMPORARY would unmap the page from
> the direct mapping but it would still allow temporary mappings for
> data copying inside the kernel (thus allow CoW, copy*user, migration).
> Which one should be default and which an opt-in I do not know. A less
> restrictive mode to be default and the more restrictive an opt-in via
> flags makes a lot of sense to me though.

The default is already NEVER_MAP_IN_KERNEL, so there is no explicit flag
for this. ALLOW_TEMPORARY should be opt-in, IMHO, and we can add it on top
later on.
Michal Hocko Feb. 4, 2021, 1:02 p.m. UTC | #35
On Thu 04-02-21 11:58:55, Mike Rapoport wrote:
> On Wed, Feb 03, 2021 at 10:12:22AM +0100, Michal Hocko wrote:
[...]
> > Wrt to the specific syscall, please document why existing interfaces are
> > not a good fit as well. It would be also great to describe interaction
> > with mlock itself (I assume the two to be incompatible - mlock will fail
> > on and mlockall will ignore it).
> 
> The interaction with mlock() belongs more to the man page, but I don't mind
> adding this to changelog as well.

I would expect this to be explicitly handled in the patch - thus the
changelog rationale.
diff mbox series

Patch

diff --git a/mm/Kconfig b/mm/Kconfig
index 5f8243442f66..ec35bf406439 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -874,5 +874,7 @@  config KMAP_LOCAL
 
 config SECRETMEM
 	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+	select GENERIC_ALLOCATOR
+	select CMA
 
 endmenu
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 904351d12c33..469211c7cc3a 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -7,12 +7,15 @@ 
 
 #include <linux/mm.h>
 #include <linux/fs.h>
+#include <linux/cma.h>
 #include <linux/mount.h>
 #include <linux/memfd.h>
 #include <linux/bitops.h>
 #include <linux/printk.h>
 #include <linux/pagemap.h>
+#include <linux/genalloc.h>
 #include <linux/syscalls.h>
+#include <linux/memblock.h>
 #include <linux/pseudo_fs.h>
 #include <linux/secretmem.h>
 #include <linux/set_memory.h>
@@ -35,24 +38,94 @@ 
 #define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
 
 struct secretmem_ctx {
+	struct gen_pool *pool;
 	unsigned int mode;
 };
 
-static struct page *secretmem_alloc_page(gfp_t gfp)
+static struct cma *secretmem_cma;
+
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
+	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int i, err;
+
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	if (!page)
+		return -ENOMEM;
+
 	/*
-	 * FIXME: use a cache of large pages to reduce the direct map
-	 * fragmentation
+	 * clear the data left from the prevoius user before dropping the
+	 * pages from the direct map
 	 */
-	return alloc_page(gfp | __GFP_ZERO);
+	for (i = 0; i < nr_pages; i++)
+		clear_highpage(page + i);
+
+	err = set_direct_map_invalid_noflush(page, nr_pages);
+	if (err)
+		goto err_cma_release;
+
+	addr = (unsigned long)page_address(page);
+	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+	if (err)
+		goto err_set_direct_map;
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+
+	return 0;
+
+err_set_direct_map:
+	/*
+	 * If a split of PUD-size page was required, it already happened
+	 * when we marked the pages invalid which guarantees that this call
+	 * won't fail
+	 */
+	set_direct_map_default_noflush(page, nr_pages);
+err_cma_release:
+	cma_release(secretmem_cma, page, nr_pages);
+	return err;
+}
+
+static void secretmem_free_page(struct secretmem_ctx *ctx, struct page *page)
+{
+	unsigned long addr = (unsigned long)page_address(page);
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_free(pool, addr, PAGE_SIZE);
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+					 gfp_t gfp)
+{
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (gen_pool_avail(pool) < PAGE_SIZE) {
+		err = secretmem_pool_increase(ctx, gfp);
+		if (err)
+			return NULL;
+	}
+
+	addr = gen_pool_alloc(pool, PAGE_SIZE);
+	if (!addr)
+		return NULL;
+
+	page = virt_to_page(addr);
+	get_page(page);
+
+	return page;
 }
 
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
+	struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
-	unsigned long addr;
 	struct page *page;
 	int err;
 
@@ -62,40 +135,25 @@  static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 retry:
 	page = find_lock_page(mapping, offset);
 	if (!page) {
-		page = secretmem_alloc_page(vmf->gfp_mask);
+		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
 		if (!page)
 			return VM_FAULT_OOM;
 
-		err = set_direct_map_invalid_noflush(page, 1);
-		if (err) {
-			put_page(page);
-			return vmf_error(err);
-		}
-
 		__SetPageUptodate(page);
 		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
 		if (unlikely(err)) {
+			secretmem_free_page(ctx, page);
 			put_page(page);
 			if (err == -EEXIST)
 				goto retry;
-			goto err_restore_direct_map;
+			return vmf_error(err);
 		}
 
-		addr = (unsigned long)page_address(page);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+		set_page_private(page, (unsigned long)ctx);
 	}
 
 	vmf->page = page;
 	return VM_FAULT_LOCKED;
-
-err_restore_direct_map:
-	/*
-	 * If a split of large page was required, it already happened
-	 * when we marked the page invalid which guarantees that this call
-	 * won't fail
-	 */
-	set_direct_map_default_noflush(page, 1);
-	return vmf_error(err);
 }
 
 static const struct vm_operations_struct secretmem_vm_ops = {
@@ -141,8 +199,9 @@  static int secretmem_migratepage(struct address_space *mapping,
 
 static void secretmem_freepage(struct page *page)
 {
-	set_direct_map_default_noflush(page, 1);
-	clear_highpage(page);
+	struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+
+	secretmem_free_page(ctx, page);
 }
 
 static const struct address_space_operations secretmem_aops = {
@@ -177,13 +236,18 @@  static struct file *secretmem_file_create(unsigned long flags)
 	if (!ctx)
 		goto err_free_inode;
 
+	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!ctx->pool)
+		goto err_free_ctx;
+
 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
 				 O_RDWR, &secretmem_fops);
 	if (IS_ERR(file))
-		goto err_free_ctx;
+		goto err_free_pool;
 
 	mapping_set_unevictable(inode->i_mapping);
 
+	inode->i_private = ctx;
 	inode->i_mapping->private_data = ctx;
 	inode->i_mapping->a_ops = &secretmem_aops;
 
@@ -197,6 +261,8 @@  static struct file *secretmem_file_create(unsigned long flags)
 
 	return file;
 
+err_free_pool:
+	gen_pool_destroy(ctx->pool);
 err_free_ctx:
 	kfree(ctx);
 err_free_inode:
@@ -215,6 +281,9 @@  SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
 		return -EINVAL;
 
+	if (!secretmem_cma)
+		return -ENOMEM;
+
 	fd = get_unused_fd_flags(flags & O_CLOEXEC);
 	if (fd < 0)
 		return fd;
@@ -235,11 +304,37 @@  SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	return err;
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+	struct page *page = virt_to_page(start);
+	unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
+	int i;
+
+	set_direct_map_default_noflush(page, nr_pages);
+
+	for (i = 0; i < nr_pages; i++)
+		clear_highpage(page + i);
+
+	cma_release(secretmem_cma, page, nr_pages);
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+	gen_pool_destroy(pool);
+}
+
 static void secretmem_evict_inode(struct inode *inode)
 {
 	struct secretmem_ctx *ctx = inode->i_private;
 
 	truncate_inode_pages_final(&inode->i_data);
+	secretmem_cleanup_pool(ctx);
 	clear_inode(inode);
 	kfree(ctx);
 }
@@ -276,3 +371,29 @@  static int secretmem_init(void)
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	int err;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
+				     "secretmem", &secretmem_cma);
+	if (err) {
+		pr_err("failed to create CMA: %d\n", err);
+		return err;
+	}
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 0;
+}
+__setup("secretmem=", secretmem_setup);