
[v6,0/6] mm: introduce memfd_secret system call to create "secret" memory areas

Message ID: 20200924132904.1391-1-rppt@kernel.org

Message

Mike Rapoport Sept. 24, 2020, 1:28 p.m. UTC
From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

This is an implementation of "secret" mappings backed by a file descriptor. 
I've dropped the boot time reservation patch for now as it is not strictly
required for the basic usage and can be easily added later either with or
without CMA.

v6 changes:
* Silence the warning about missing syscall, thanks to Qian Cai
* Replace spaces with tabs in Kconfig additions, per Randy
* Add a selftest. 

v5 changes:
* rebase on v5.9-rc5
* drop boot time memory reservation patch

v4 changes:
* rebase on v5.9-rc1
* Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
* Make secret mappings exclusive by default and only require flags to
  memfd_secret() system call for uncached mappings, thanks again Kirill :)

v3 changes:
* Squash kernel-parameters.txt update into the commit that added the
  command line option.
* Make uncached mode explicitly selectable by architectures. For now enable
  it only on x86.

v2 changes:
* Follow Michael's suggestion and name the new system call 'memfd_secret'
* Add kernel-parameters documentation about the boot option
* Fix i386-tinyconfig regression reported by the kbuild bot.
  CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
  on the one hand, while still making it available unconditionally on
  architectures that support SET_DIRECT_MAP.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present in
the direct map and will have the desired protection bits set in the user page
table. For instance, the current implementation allows uncached mappings.
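
For illustration, here is a minimal usage sketch. It assumes a kernel with
this series applied; there is no libc wrapper, so the call goes through
syscall(2), and __NR_memfd_secret and SECRETMEM_UNCACHED come from the
headers the series adds.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#include <linux/secretmem.h>	/* SECRETMEM_UNCACHED, added by this series */

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* flags == 0 gives the default "exclusive" mode: the pages are
	 * removed from the kernel's direct map; SECRETMEM_UNCACHED would
	 * additionally make the user mapping uncached */
	int fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0) {
		perror("memfd_secret");
		return 1;
	}

	/* size the backing file, then mmap() it as usual */
	if (ftruncate(fd, page) < 0) {
		perror("ftruncate");
		return 1;
	}

	char *secret = mmap(NULL, page, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	if (secret == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* data written here is no longer reachable via the direct map */
	strcpy(secret, "t0p s3cr3t");

	munmap(secret, page);
	close(fd);
	return 0;
}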

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.

Additionally, the secret mappings may be used as a means to protect guest
memory in a virtual machine host.

To demonstrate secret memory usage we've created a userspace library
[1] that does two things: first, it acts as a preloader for openssl,
redirecting all the OPENSSL_malloc calls to secret memory so that any
secret keys are automatically protected; second, it exposes the API to
the user who needs it. We anticipate that a lot of the use cases would
be like the openssl one: many toolkits that deal with secret keys
already have special handling for the memory to try to give them
greater protection, so this would simply be pluggable into the
toolkits without any need for user application modification.
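
To give an idea of the preloader part, here is a rough sketch of the
approach (not the actual library code: it assumes __NR_memfd_secret from
this series, hooks OpenSSL 1.1's CRYPTO_set_mem_functions(), and uses a
deliberately primitive bump allocator that never reuses freed blocks):

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#include <openssl/crypto.h>

#define POOL_SIZE (2 * 1024 * 1024)
#define HDR 16	/* per-block size header, keeps payloads 16-byte aligned */

static char *pool;
static size_t pool_off;

static void *secret_malloc(size_t n, const char *file, int line)
{
	size_t total = (HDR + n + 15) & ~(size_t)15;
	char *p;

	if (!pool || total < n || pool_off + total > POOL_SIZE)
		return NULL;
	p = pool + pool_off;
	pool_off += total;
	*(size_t *)p = n;	/* remember the size for realloc */
	return p + HDR;
}

static void *secret_realloc(void *old, size_t n, const char *file, int line)
{
	void *p = secret_malloc(n, file, line);

	if (p && old) {
		size_t old_n = *(size_t *)((char *)old - HDR);
		memcpy(p, old, old_n < n ? old_n : n);
	}
	return p;
}

static void secret_free(void *p, const char *file, int line)
{
	/* sketch: blocks are not reused; the pool dies with the process */
}

__attribute__((constructor))
static void secret_pool_init(void)
{
	int fd = syscall(__NR_memfd_secret, 0);

	if (fd < 0)
		return;	/* no secretmem: keep OpenSSL's default malloc */
	if (ftruncate(fd, POOL_SIZE) == 0)
		pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);
	if (pool == MAP_FAILED)
		pool = NULL;
	if (pool)
		CRYPTO_set_mem_functions(secret_malloc, secret_realloc,
					 secret_free);
}

Built as a shared object and run with LD_PRELOAD, the constructor installs
the hooks before the application touches OpenSSL, so every OPENSSL_malloc()
lands in pages that are absent from the direct map.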

I've hesitated whether to continue to use new flags to memfd_create() or to
add a new system call, and I've decided to use a new system call after I
started to look into the man pages update. There would have been two
completely independent descriptions, and I think it would have been very
confusing.

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

As the fragmentation of the direct map was one of the major concerns raised
during the previous postings, I've added an amortizing cache of PMD-size
pages to each file descriptor that is used as an allocation pool for the
secret memory areas.
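
In sketch form, the allocation path looks roughly like this (names are
illustrative and locking, TLB flushing and error handling are omitted; the
real code is in mm/secretmem.c):

static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx, gfp_t gfp)
{
	unsigned long addr;

	if (gen_pool_avail(ctx->pool) < PAGE_SIZE) {
		/* refill with one PMD-size chunk so that the direct-map
		 * holes stay confined to a single 2M region instead of
		 * being scattered across it as individual 4K pages */
		struct page *chunk = alloc_pages(gfp, PMD_PAGE_ORDER);
		int i;

		for (i = 0; i < (1 << PMD_PAGE_ORDER); i++)
			set_direct_map_invalid_noflush(chunk + i);

		gen_pool_add(ctx->pool, (unsigned long)page_address(chunk),
			     PMD_SIZE, NUMA_NO_NODE);
	}

	addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
	return addr ? virt_to_page((void *)addr) : NULL;
}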

v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org

Mike Rapoport (6):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  mm: introduce memfd_secret system call to create "secret" memory areas
  arch, mm: wire up memfd_secret system call where relevant
  mm: secretmem: use PMD-size pages to amortize direct map fragmentation
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/Kconfig                              |   7 +
 arch/arm64/include/asm/unistd.h           |   2 +-
 arch/arm64/include/asm/unistd32.h         |   2 +
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/x86/Kconfig                          |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   7 +-
 include/uapi/linux/magic.h                |   1 +
 include/uapi/linux/secretmem.h            |   8 +
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   4 +
 mm/Makefile                               |   1 +
 mm/internal.h                             |   3 +
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 333 ++++++++++++++++++++++
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 +++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests    |  17 ++
 25 files changed, 703 insertions(+), 13 deletions(-)
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

Comments

Andrew Morton Sept. 25, 2020, 2:34 a.m. UTC | #1
On Thu, 24 Sep 2020 16:28:58 +0300 Mike Rapoport <rppt@kernel.org> wrote:

> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor. 
> I've dropped the boot time reservation patch for now as it is not strictly
> required for the basic usage and can be easily added later either with or
> without CMA.
> 
> ...
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call. The desired protection mode for the
> memory is configured using the flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will have the desired protection bits set in the user page
> table. For instance, the current implementation allows uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants'
> mappings.
> 
> Additionally, the secret mappings may be used as a means to protect guest
> memory in a virtual machine host.
> 
> To demonstrate secret memory usage we've created a userspace library
> [1] that does two things: first, it acts as a preloader for openssl,

I can find no [1].

I'm not a fan of the enumerated footnote thing.  Why not inline the url
right here so readers don't need to jump around?
Mike Rapoport Sept. 25, 2020, 6:42 a.m. UTC | #2
On Thu, Sep 24, 2020 at 07:34:28PM -0700, Andrew Morton wrote:
> On Thu, 24 Sep 2020 16:28:58 +0300 Mike Rapoport <rppt@kernel.org> wrote:
> 
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > Hi,
> > 
> > This is an implementation of "secret" mappings backed by a file descriptor. 
> > I've dropped the boot time reservation patch for now as it is not strictly
> > required for the basic usage and can be easily added later either with or
> > without CMA.
> > 
> > ...
> > 
> > The file descriptor backing secret memory mappings is created using a
> > dedicated memfd_secret system call. The desired protection mode for the
> > memory is configured using the flags parameter of the system call. The mmap()
> > of the file descriptor created with memfd_secret() will create a "secret"
> > memory mapping. The pages in that mapping will be marked as not present in
> > the direct map and will have the desired protection bits set in the user page
> > table. For instance, the current implementation allows uncached mappings.
> > 
> > Although normally Linux userspace mappings are protected from other users,
> > such secret mappings are useful for environments where a hostile tenant is
> > trying to trick the kernel into giving them access to other tenants'
> > mappings.
> > 
> > Additionally, the secret mappings may be used as a means to protect guest
> > memory in a virtual machine host.
> > 
> > To demonstrate secret memory usage we've created a userspace library
> > [1] that does two things: first, it acts as a preloader for openssl,
> 
> I can find no [1].

Oops, sorry. It's

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git/

> I'm not a fan of the enumerated footnote thing.  Why not inline the url
> right here so readers don't need to jump around?
> 
>
Hagen Paul Pfeifer Nov. 1, 2020, 11:09 a.m. UTC | #3
* Mike Rapoport | 2020-09-24 16:28:58 [+0300]:

>This is an implementation of "secret" mappings backed by a file descriptor. 
>I've dropped the boot time reservation patch for now as it is not strictly
>required for the basic usage and can be easily added later either with or
>without CMA.

Isn't memfd_secret currently *unnecessarily* designed to be a "one task
feature"? memfd_secret fulfills exactly two (generic) features:

- address space isolation from kernel (aka SECRET_EXCLUSIVE, not in kernel's
  direct map) - hide from kernel, great
- disabling processor's memory caches against speculative-execution vulnerabilities
  (spectre and friends, aka SECRET_UNCACHED), also great

But, what about the following use-case: implementing a hardened IPC mechanism
where even the kernel is not aware of any data and optionally via SECRET_UNCACHED
even the hardware caches are bypassed! With the patches we are so close to
achieving this.

How? Shared, SECRET_EXCLUSIVE and SECRET_UNCACHED mmaped pages for the
IPC-involved tasks required to know this mapping (and the memfd_secret fd).
After IPC is done, tasks can copy sensitive data from the IPC pages into
memfd_secret() pages; non-sensitive data can be used/copied everywhere.

One missing piece is still the secure zeroization of the page(s) if the
mapping is closed by the last process, to guarantee a secure cleanup. This
can probably be done as a general mmap feature, not coupled to memfd_secret(),
and can be done independently (a "reverse" MAP_UNINITIALIZED feature).

PS: thank you Mike for your effort!

See the following pseudo-code as an example:


// Simple example: assume the file descriptor and the mapping are inherited
// by the child for simplicity; ptr is valid in both parent and child.
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

uint32_t *ptr;
pid_t pid_other;

void signal_handler(int sig)
{
	// update IPC data on the shared, uncached, exclusively mapped page
	*ptr += 1;
	// inform the other side
	sleep(1);
	kill(pid_other, SIGUSR1);
}

void ipc_loop(void)
{
	signal(SIGUSR1, signal_handler);
	while (1) {
		sleep(1);
	}
}

int main(void)
{
	pid_t child_pid;

	int fd = memfd_secret(SECRETMEM_UNCACHED);
	ftruncate(fd, PAGE_SIZE);
	ptr = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	switch (child_pid = fork()) {
	case 0:
		pid_other = getppid();
		break;
	default:
		pid_other = child_pid;
		// give the child a moment to install its handler,
		// then kick off the ping-pong
		sleep(1);
		kill(child_pid, SIGUSR1);
		break;
	}

	ipc_loop();
}


Hagen
David Hildenbrand Nov. 2, 2020, 9:11 a.m. UTC | #4
On 24.09.20 15:28, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor.
> I've dropped the boot time reservation patch for now as it is not strictly
> required for the basic usage and can be easily added later either with or
> without CMA.

Hi Mike,

I'd like to stress again that I'd prefer *any* secretmem allocations 
going via CMA as long as these pages are unmovable. The user can 
allocate a non-significant amount of unmovable allocations only fenced 
by the mlock limit, which behave very differently from mlocked pages - they 
are not movable for page compaction/migration.

Assume you have a system with quite some ZONE_MOVABLE memory (esp. in 
virtualized environments), eating up a significant amount of 
!ZONE_MOVABLE memory dynamically at runtime can lead to non-obvious 
issues. It looks like you have plenty of free memory, but the kernel 
might still OOM when trying to do kernel allocations e.g., for 
pagetables. With CMA we at least know what we're dealing with - it 
behaves like ZONE_MOVABLE except for the owner that can place unmovable 
pages there. We can use it to compute statically the amount of 
ZONE_MOVABLE memory we can have in the system without doing harm to the 
system.

Ideally, we would want to support page migration/compaction and allow 
for allocation from ZONE_MOVABLE as well. Would involve temporarily 
mapping, copying, unmapping. Sounds feasible, but not sure which 
roadblocks we would find on the way.

[...]

> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call. The desired protection mode for the
> memory is configured using the flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will have the desired protection bits set in the user page
> table. For instance, the current implementation allows uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants'
> mappings.
> 
> Additionally, the secret mappings may be used as a means to protect guest
> memory in a virtual machine host.
> 
> To demonstrate secret memory usage we've created a userspace library
> [1] that does two things: first, it acts as a preloader for openssl,
> redirecting all the OPENSSL_malloc calls to secret memory so that any
> secret keys are automatically protected; second, it exposes the API to
> the user who needs it. We anticipate that a lot of the use cases would
> be like the openssl one: many toolkits that deal with secret keys
> already have special handling for the memory to try to give them
> greater protection, so this would simply be pluggable into the
> toolkits without any need for user application modification.
> 
> I've hesitated whether to continue to use new flags to memfd_create() or to
> add a new system call, and I've decided to use a new system call after I
> started to look into the man pages update. There would have been two
> completely independent descriptions, and I think it would have been very
> confusing.

This was also raised on lwn.net by "dullfire" [1]. I do wonder if 
memfd_create() would be the right place for it as well.

[1] https://lwn.net/Articles/835342/#Comments

> 
> Hiding secret memory mappings behind an anonymous file allows (ab)use of
> the page cache for tracking pages allocated for the "secret" mappings as
> well as using address_space_operations for e.g. page migration callbacks.
> 
> The anonymous file may also be used implicitly, like hugetlb files, to
> implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
> ABIs in the future.
> 
> As the fragmentation of the direct map was one of the major concerns raised
> during the previous postings, I've added an amortizing cache of PMD-size
> pages to each file descriptor that is used as an allocation pool for the
> secret memory areas.
David Hildenbrand Nov. 2, 2020, 9:31 a.m. UTC | #5
On 02.11.20 10:11, David Hildenbrand wrote:
> On 24.09.20 15:28, Mike Rapoport wrote:
>> From: Mike Rapoport <rppt@linux.ibm.com>
>>
>> Hi,
>>
>> This is an implementation of "secret" mappings backed by a file descriptor.
>> I've dropped the boot time reservation patch for now as it is not strictly
>> required for the basic usage and can be easily added later either with or
>> without CMA.
> 
> Hi Mike,
> 
> I'd like to stress again that I'd prefer *any* secretmem allocations
> going via CMA as long as these pages are unmovable. The user can
> allocate a non-significant amount of unmovable allocations only fenced

lol, "non-neglectable" or "significant". Guess I need another coffee :)
Mike Rapoport Nov. 2, 2020, 3:40 p.m. UTC | #6
On Sun, Nov 01, 2020 at 12:09:35PM +0100, Hagen Paul Pfeifer wrote:
> * Mike Rapoport | 2020-09-24 16:28:58 [+0300]:
> 
> >This is an implementation of "secret" mappings backed by a file descriptor. 
> >I've dropped the boot time reservation patch for now as it is not strictly
> >required for the basic usage and can be easily added later either with or
> >without CMA.
> 
> Isn't memfd_secret currently *unnecessarily* designed to be a "one task
> feature"? memfd_secret fulfills exactly two (generic) features:
> 
> - address space isolation from kernel (aka SECRET_EXCLUSIVE, not in kernel's
>   direct map) - hide from kernel, great
> - disabling processor's memory caches against speculative-execution vulnerabilities
>   (spectre and friends, aka SECRET_UNCACHED), also great
> 
> But, what about the following use-case: implementing a hardened IPC mechanism
> where even the kernel is not aware of any data and optionally via SECRET_UNCACHED
> even the hardware caches are bypassed! With the patches we are so close to
> achieving this.
> 
> How? Shared, SECRET_EXCLUSIVE and SECRET_UNCACHED mmaped pages for the
> IPC-involved tasks required to know this mapping (and the memfd_secret fd).
> After IPC is done, tasks can copy sensitive data from the IPC pages into
> memfd_secret() pages; non-sensitive data can be used/copied everywhere.

As long as the tasks share the file descriptor, they can share the
secretmem pages, pretty much like with a normal memfd.

> One missing piece is still the secure zeroization of the page(s) if the
> mapping is closed by the last process, to guarantee a secure cleanup. This
> can probably be done as a general mmap feature, not coupled to memfd_secret(),
> and can be done independently (a "reverse" MAP_UNINITIALIZED feature).

There are "init_on_alloc" and "init_on_free" kernel parameters that
enable zeroing of the pages on alloc and on free globally.
Anyway, I'll add zeroing of the freed memory to secretmem.

> PS: thank you Mike for your effort!
> 
> See the following pseudo-code as an example:
> 
> 
> // Simple example: assume the file descriptor and the mapping are inherited
> // by the child for simplicity; ptr is valid in both parent and child.
> int fd = memfd_secret(SECRETMEM_UNCACHED);
> ftruncate(fd, PAGE_SIZE);
> ptr = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 
The ptr here will be visible to both parent and child.

> pid_t pid_other;
> 
> void signal_handler(int sig)
> {
> 	// update IPC data on the shared, uncached, exclusively mapped page
> 	*ptr += 1;
> 	// inform other
> 	sleep(1);
> 	kill(pid_other, SIGUSR1);
> }
> 
> void ipc_loop(void)
> {
> 	signal(SIGUSR1, signal_handler);
> 	while (1) {
> 		sleep(1);
> 	}
> }
> 
> int main(void)
> {
> 	pid_t child_pid;
> 
> 	switch (child_pid = fork()) {
> 	case 0:
> 		pid_other = getppid();
> 		break;
> 	default:
> 		pid_other = child_pid;
> 		break;
> 	}
> 	
> 	ipc_loop();
> }
> 
> 
> Hagen
>
Mike Rapoport Nov. 2, 2020, 5:43 p.m. UTC | #7
On Mon, Nov 02, 2020 at 10:11:12AM +0100, David Hildenbrand wrote:
> On 24.09.20 15:28, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > Hi,
> > 
> > This is an implementation of "secret" mappings backed by a file descriptor.
> > I've dropped the boot time reservation patch for now as it is not strictly
> > required for the basic usage and can be easily added later either with or
> > without CMA.
> 
> Hi Mike,
> 
> I'd like to stress again that I'd prefer *any* secretmem allocations going
> via CMA as long as these pages are unmovable. The user can allocate a
> non-significant amount of unmovable allocations only fenced by the mlock
> limit, which behave very differently from mlocked pages - they are not movable
> for page compaction/migration.
> 
> Assume you have a system with quite some ZONE_MOVABLE memory (esp. in
> virtualized environments), eating up a significant amount of !ZONE_MOVABLE
> memory dynamically at runtime can lead to non-obvious issues. It looks like
> you have plenty of free memory, but the kernel might still OOM when trying
> to do kernel allocations e.g., for pagetables. With CMA we at least know
> what we're dealing with - it behaves like ZONE_MOVABLE except for the owner
> that can place unmovable pages there. We can use it to compute statically
> the amount of ZONE_MOVABLE memory we can have in the system without doing
> harm to the system.

Why would you say that secretmem allocates from !ZONE_MOVABLE?
If we put boot time reservations aside, the memory allocation for
secretmem follows the same rules as the memory allocations for any file
descriptor. That means we allocate memory with GFP_HIGHUSER_MOVABLE.
After the allocation the memory indeed becomes unmovable but it's not
like we are eating memory from other zones here.

Maybe I'm missing something, but it seems to me that using CMA for any
secretmem allocation would needlessly complicate things.

> Ideally, we would want to support page migration/compaction and allow for
> allocation from ZONE_MOVABLE as well. Would involve temporarily mapping,
> copying, unmapping. Sounds feasible, but not sure which roadblocks we would
> find on the way.

We can support migration/compaction with temporary mapping. The first
roadblock I've hit there was that migration allocates 4K destination
page and if we use it in secret map we are back to scrambling the direct
map into 4K pieces. It still sounds feasible but not as trivial :)

But again, there is nothing in the current form of secretmem that
prevents allocation from ZONE_MOVABLE.

> [...]
> 
> > I've hesitated whether to continue to use new flags to memfd_create() or to
> > add a new system call, and I've decided to use a new system call after I
> > started to look into the man pages update. There would have been two
> > completely independent descriptions, and I think it would have been very
> > confusing.
> 
> This was also raised on lwn.net by "dullfire" [1]. I do wonder if
> memfd_create() would be the right place for it as well.

I lean towards a dedicated syscall because, as I said, to me it would
seem less confusing.

> [1] https://lwn.net/Articles/835342/#Comments
> 
> > 
> > Hiding secret memory mappings behind an anonymous file allows (ab)use of
> > the page cache for tracking pages allocated for the "secret" mappings as
> > well as using address_space_operations for e.g. page migration callbacks.
> > 
> > The anonymous file may also be used implicitly, like hugetlb files, to
> > implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
> > ABIs in the future.
> > 
> > As the fragmentation of the direct map was one of the major concerns raised
> > during the previous postings, I've added an amortizing cache of PMD-size
> > pages to each file descriptor that is used as an allocation pool for the
> > secret memory areas.
David Hildenbrand Nov. 2, 2020, 5:51 p.m. UTC | #8
>> Assume you have a system with quite some ZONE_MOVABLE memory (esp. in
>> virtualized environments), eating up a significant amount of !ZONE_MOVABLE
>> memory dynamically at runtime can lead to non-obvious issues. It looks like
>> you have plenty of free memory, but the kernel might still OOM when trying
>> to do kernel allocations e.g., for pagetables. With CMA we at least know
>> what we're dealing with - it behaves like ZONE_MOVABLE except for the owner
>> that can place unmovable pages there. We can use it to compute statically
>> the amount of ZONE_MOVABLE memory we can have in the system without doing
>> harm to the system.
> 
> Why would you say that secretmem allocates from !ZONE_MOVABLE?
> If we put boot time reservations aside, the memory allocation for
> secretmem follows the same rules as the memory allocations for any file
> descriptor. That means we allocate memory with GFP_HIGHUSER_MOVABLE.

Oh, okay - I missed that! I had the impression that pages are unmovable 
and allocating from ZONE_MOVABLE would be a violation of that?

> After the allocation the memory indeed becomes unmovable but it's not
> like we are eating memory from other zones here.

... and here you have your problem. That's a no-no. We only allow it in 
very special cases where it can't be avoided - e.g., vfio having to pin 
guest memory when passing through memory to VMs.

Hotplug memory, online it to ZONE_MOVABLE. Allocate secretmem. Try to 
unplug the memory again -> endless loop in offline_pages().

Or have a CMA area that gets used with GFP_HIGHUSER_MOVABLE. Allocate 
secretmem. The owner of the area tries to allocate memory - always 
fails. Purpose of CMA destroyed.

> 
>> Ideally, we would want to support page migration/compaction and allow for
>> allocation from ZONE_MOVABLE as well. Would involve temporarily mapping,
>> copying, unmapping. Sounds feasible, but not sure which roadblocks we would
>> find on the way.
> 
> We can support migration/compaction with temporary mapping. The first
> roadblock I've hit there was that migration allocates 4K destination
> page and if we use it in secret map we are back to scrambling the direct
> map into 4K pieces. It still sounds feasible but not as trivial :)

That sounds like the proper way for me to do it then.

> 
> But again, there is nothing in the current form of secretmem that
> prevents allocation from ZONE_MOVABLE.

Oh, there is something: that the pages are not movable.
Mike Rapoport Nov. 3, 2020, 9:52 a.m. UTC | #9
On Mon, Nov 02, 2020 at 06:51:09PM +0100, David Hildenbrand wrote:
> > > Assume you have a system with quite some ZONE_MOVABLE memory (esp. in
> > > virtualized environments), eating up a significant amount of !ZONE_MOVABLE
> > > memory dynamically at runtime can lead to non-obvious issues. It looks like
> > > you have plenty of free memory, but the kernel might still OOM when trying
> > > to do kernel allocations e.g., for pagetables. With CMA we at least know
> > > what we're dealing with - it behaves like ZONE_MOVABLE except for the owner
> > > that can place unmovable pages there. We can use it to compute statically
> > > the amount of ZONE_MOVABLE memory we can have in the system without doing
> > > harm to the system.
> > 
> > Why would you say that secretmem allocates from !ZONE_MOVABLE?
> > If we put boot time reservations aside, the memory allocation for
> > secretmem follows the same rules as the memory allocations for any file
> > descriptor. That means we allocate memory with GFP_HIGHUSER_MOVABLE.
> 
> Oh, okay - I missed that! I had the impression that pages are unmovable and
> allocating from ZONE_MOVABLE would be a violation of that?
> 
> > After the allocation the memory indeed becomes unmovable but it's not
> > like we are eating memory from other zones here.
> 
> ... and here you have your problem. That's a no-no. We only allow it in very
> special cases where it can't be avoided - e.g., vfio having to pin guest
> memory when passing through memory to VMs.
> 
> Hotplug memory, online it to ZONE_MOVABLE. Allocate secretmem. Try to unplug
> the memory again -> endless loop in offline_pages().
> 
> Or have a CMA area that gets used with GFP_HIGHUSER_MOVABLE. Allocate
> secretmem. The owner of the area tries to allocate memory - always fails.
> Purpose of CMA destroyed.
> 
> > 
> > > Ideally, we would want to support page migration/compaction and allow for
> > > allocation from ZONE_MOVABLE as well. Would involve temporarily mapping,
> > > copying, unmapping. Sounds feasible, but not sure which roadblocks we would
> > > find on the way.
> > 
> > We can support migration/compaction with temporary mapping. The first
> > roadblock I've hit there was that migration allocates 4K destination
> > page and if we use it in secret map we are back to scrambling the direct
> > map into 4K pieces. It still sounds feasible but not as trivial :)
> 
> That sounds like the proper way for me to do it then.
 
Although migration of secretmem pages sounds feasible now, there may be
other issues I didn't see because I'm not very familiar with
migration/compaction code.

I've looked again at CMA and I'm inclined to agree with you that using
CMA for secretmem allocations could be the right thing.
David Hildenbrand Nov. 3, 2020, 10:11 a.m. UTC | #10
On 03.11.20 10:52, Mike Rapoport wrote:
> On Mon, Nov 02, 2020 at 06:51:09PM +0100, David Hildenbrand wrote:
>>>> Assume you have a system with quite some ZONE_MOVABLE memory (esp. in
>>>> virtualized environments), eating up a significant amount of !ZONE_MOVABLE
>>>> memory dynamically at runtime can lead to non-obvious issues. It looks like
>>>> you have plenty of free memory, but the kernel might still OOM when trying
>>>> to do kernel allocations e.g., for pagetables. With CMA we at least know
>>>> what we're dealing with - it behaves like ZONE_MOVABLE except for the owner
>>>> that can place unmovable pages there. We can use it to compute statically
>>>> the amount of ZONE_MOVABLE memory we can have in the system without doing
>>>> harm to the system.
>>>
>>> Why would you say that secretmem allocates from !ZONE_MOVABLE?
>>> If we put boot time reservations aside, the memory allocation for
>>> secretmem follows the same rules as the memory allocations for any file
>>> descriptor. That means we allocate memory with GFP_HIGHUSER_MOVABLE.
>>
>> Oh, okay - I missed that! I had the impression that pages are unmovable and
>> allocating from ZONE_MOVABLE would be a violation of that?
>>
>>> After the allocation the memory indeed becomes unmovable but it's not
>>> like we are eating memory from other zones here.
>>
>> ... and here you have your problem. That's a no-no. We only allow it in very
>> special cases where it can't be avoided - e.g., vfio having to pin guest
>> memory when passing through memory to VMs.
>>
>> Hotplug memory, online it to ZONE_MOVABLE. Allocate secretmem. Try to unplug
>> the memory again -> endless loop in offline_pages().
>>
>> Or have a CMA area that gets used with GFP_HIGHUSER_MOVABLE. Allocate
>> secretmem. The owner of the area tries to allocate memory - always fails.
>> Purpose of CMA destroyed.
>>
>>>
>>>> Ideally, we would want to support page migration/compaction and allow for
>>>> allocation from ZONE_MOVABLE as well. Would involve temporarily mapping,
>>>> copying, unmapping. Sounds feasible, but not sure which roadblocks we would
>>>> find on the way.
>>>
>>> We can support migration/compaction with temporary mapping. The first
>>> roadblock I've hit there was that migration allocates 4K destination
>>> page and if we use it in secret map we are back to scrambling the direct
>>> map into 4K pieces. It still sounds feasible but not as trivial :)
>>
>> That sounds like the proper way for me to do it then.
>   
> Although migration of secretmem pages sounds feasible now, there may be
> other issues I didn't see because I'm not very familiar with
> migration/compaction code.

Migration of PMDs might also be feasible - and it would be even cleaner.
But I agree that that might require more work, and starting with something
simpler (!movable) is the right way to move forward.
Hagen Paul Pfeifer Nov. 3, 2020, 1:52 p.m. UTC | #11
> On 11/02/2020 4:40 PM Mike Rapoport <rppt@kernel.org> wrote:

> > Isn't memfd_secret currently *unnecessarily* designed to be a "one task
> > feature"? memfd_secret fulfills exactly two (generic) features:
> > 
> > - address space isolation from kernel (aka SECRET_EXCLUSIVE, not in kernel's
> >   direct map) - hide from kernel, great
> > - disabling processor's memory caches against speculative-execution vulnerabilities
> >   (spectre and friends, aka SECRET_UNCACHED), also great
> > 
> > But, what about the following use-case: implementing a hardened IPC mechanism
> > where even the kernel is not aware of any data and optionally via SECRET_UNCACHED
> > even the hardware caches are bypassed! With the patches we are so close to
> > achieving this.
> > 
> > How? Shared, SECRET_EXCLUSIVE and SECRET_UNCACHED mmaped pages for the
> > IPC-involved tasks required to know this mapping (and the memfd_secret fd).
> > After IPC is done, tasks can copy sensitive data from the IPC pages into
> > memfd_secret() pages; non-sensitive data can be used/copied everywhere.
> 
> As long as the tasks share the file descriptor, they can share the
> secretmem pages, pretty much like with a normal memfd.

Including process_vm_readv() and process_vm_writev()? Let's take a hypothetical
"dbus-daemon-secure" service that receives data from process A and wants to
copy/distribute it to data areas of N other processes. Much like dbus, but
without SOCK_DGRAM; rather, a direct copy into secretmem/mmap pages (a ring
buffer). Should be possible, right?

> > One missing piece is still the secure zeroization of the page(s) if the
> > mapping is closed by the last process, to guarantee a secure cleanup. This
> > can probably be done as a general mmap feature, not coupled to memfd_secret(),
> > and can be done independently (a "reverse" MAP_UNINITIALIZED feature).
> 
> There are "init_on_alloc" and "init_on_free" kernel parameters that
> enable zeroing of the pages on alloc and on free globally.
> Anyway, I'll add zeroing of the freed memory to secretmem.

Great, this allows page-specific (thus runtime-performance-optimized) zeroing
of secured pages. init_on_free lowers the performance too much and is not precise
enough.

Hagen
Mike Rapoport Nov. 3, 2020, 4:30 p.m. UTC | #12
On Tue, Nov 03, 2020 at 02:52:14PM +0100, Hagen Paul Pfeifer wrote:
> > On 11/02/2020 4:40 PM Mike Rapoport <rppt@kernel.org> wrote:
> 
> > > Isn't memfd_secret currently *unnecessarily* designed to be a "one task
> > > feature"? memfd_secret fulfills exactly two (generic) features:
> > > 
> > > - address space isolation from kernel (aka SECRET_EXCLUSIVE, not in kernel's
> > >   direct map) - hide from kernel, great
> > > - disabling processor's memory caches against speculative-execution vulnerabilities
> > >   (spectre and friends, aka SECRET_UNCACHED), also great
> > > 
> > > But, what about the following use-case: implementing a hardened IPC mechanism
> > > where even the kernel is not aware of any data and optionally via SECRET_UNCACHED
> > > even the hardware caches are bypassed! With the patches we are so close to
> > > achieving this.
> > > 
> > > How? Shared, SECRET_EXCLUSIVE and SECRET_UNCACHED mmaped pages for the
> > > IPC-involved tasks required to know this mapping (and the memfd_secret fd).
> > > After IPC is done, tasks can copy sensitive data from the IPC pages into
> > > memfd_secret() pages; non-sensitive data can be used/copied everywhere.
> > 
> > As long as the tasks share the file descriptor, they can share the
> > secretmem pages, pretty much like with a normal memfd.
> 
> Including process_vm_readv() and process_vm_writev()? Let's take a hypothetical
> "dbus-daemon-secure" service that receives data from process A and wants to
> copy/distribute it to data areas of N other processes. Much like dbus, but
> without SOCK_DGRAM; rather, a direct copy into secretmem/mmap pages (a ring
> buffer). Should be possible, right?

I'm not sure I follow you here.
For process_vm_readv() and process_vm_writev() secretmem will only be
accessible on the local side, but not on the remote one.
So copying data to secretmem pages using process_vm_writev wouldn't
work.

> > > One missing piece is still the secure zeroization of the page(s) if the
> > > mapping is closed by the last process, to guarantee a secure cleanup. This
> > > can probably be done as a general mmap feature, not coupled to memfd_secret(),
> > > and can be done independently (a "reverse" MAP_UNINITIALIZED feature).
> > 
> > There are "init_on_alloc" and "init_on_free" kernel parameters that
> > enable zeroing of the pages on alloc and on free globally.
> > Anyway, I'll add zeroing of the freed memory to secretmem.
> 
> Great, this allows page-specific (thus runtime-performance-optimized) zeroing
> of secured pages. init_on_free lowers the performance too much and is not precise
> enough.
> 
> Hagen
Hagen Paul Pfeifer Nov. 4, 2020, 11:39 a.m. UTC | #13
> On 11/03/2020 5:30 PM Mike Rapoport <rppt@kernel.org> wrote:
> 
> > > As long as the tasks share the file descriptor, they can share the
> > > secretmem pages, pretty much like with a normal memfd.
> > 
> > Including process_vm_readv() and process_vm_writev()? Let's take a hypothetical
> > "dbus-daemon-secure" service that receives data from process A and wants to
> > copy/distribute it to data areas of N other processes. Much like dbus, but
> > without SOCK_DGRAM; rather, a direct copy into secretmem/mmap pages (a ring
> > buffer). Should be possible, right?
> 
> I'm not sure I follow you here.
> For process_vm_readv() and process_vm_writev() secretmem will only be
> accessible on the local side, but not on the remote one.
> So copying data to secretmem pages using process_vm_writev wouldn't
> work.

A hypothetical "dbus-daemon-secure" service will not be *process-related* with its
communication peers. E.g. a password-input process (reading a password into a
secured-memory page) will transfer the password to dbus-daemon-secure, and this
service will hand the password over to two additional applications: IPsec processes
on CPU0 and CPU1 (which themselves use a secured-memory page).

So, a four-application IPC chain:
 password-input -> dbus-daemon-secure -> {IPsec0, IPsec1}

- password-input: uses a secured page to read/save the password locally after reading from TTY
- dbus-daemon-secure: uses a secured page for IPC (legitimate user can write and read into the secured page)
- IPsecN: has a secured page to save the password locally (and probably other data as well); the IPC memory is memset'ed after the copy

Goal: the whole password is never saved/touched on non-secured pages during IPC transfer.

Question: maybe a *file-descriptor passing* mechanism can do the trick? I.e.
dbus-daemon-secure allocates secure pages via memfd_secret()/mmap, and permitted
processes get the descriptor/mmaped page passed so they can use the pages directly?

Hagen
Mike Rapoport Nov. 4, 2020, 5:02 p.m. UTC | #14
On Wed, Nov 04, 2020 at 12:39:13PM +0100, Hagen Paul Pfeifer wrote:
> > On 11/03/2020 5:30 PM Mike Rapoport <rppt@kernel.org> wrote:
> > 
> > > > As long as the tasks share the file descriptor, they can share the
> > > > secretmem pages, pretty much like with a normal memfd.
> > > 
> > > Including process_vm_readv() and process_vm_writev()? Let's take a hypothetical
> > > "dbus-daemon-secure" service that receives data from process A and wants to
> > > copy/distribute it to data areas of N other processes. Much like dbus, but
> > > without SOCK_DGRAM; rather, a direct copy into secretmem/mmap pages (a ring
> > > buffer). Should be possible, right?
> > 
> > I'm not sure I follow you here.
> > For process_vm_readv() and process_vm_writev() secretmem will only be
> > accessible on the local side, but not on the remote one.
> > So copying data to secretmem pages using process_vm_writev wouldn't
> > work.
> 
> A hypothetical "dbus-daemon-secure" service will not be *process-related* with its
> communication peers. E.g. a password-input process (reading a password into a
> secured-memory page) will transfer the password to dbus-daemon-secure, and this
> service will hand the password over to two additional applications: IPsec processes
> on CPU0 and CPU1 (which themselves use a secured-memory page).
> 
> So, a four-application IPC chain:
>  password-input -> dbus-daemon-secure -> {IPsec0, IPsec1}
> 
> - password-input: uses a secured page to read/save the password locally after reading from TTY
> - dbus-daemon-secure: uses a secured page for IPC (legitimate user can write and read into the secured page)
> - IPsecN: has a secured page to save the password locally (and probably other data as well); the IPC memory is memset'ed after the copy
> 
> Goal: the whole password is never saved/touched on non-secured pages during IPC transfer.
> 
> Question: maybe a *file-descriptor passing* mechanism can do the trick? I.e.
> dbus-daemon-secure allocates secure pages via memfd_secret()/mmap, and permitted
> processes get the descriptor/mmaped page passed so they can use the pages directly?

Yes, this will work. The processes that share the memfd_secret file
descriptor will have access to the same memory pages, pretty much like
with shared memory.
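
The hand-over itself is the usual SCM_RIGHTS descriptor passing over an
AF_UNIX socket; a minimal sketch of the sending side (helper name is
illustrative), after which the receiver simply mmap()s the fd:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass an fd (e.g. one returned by memfd_secret()) over a connected
 * AF_UNIX socket; the peer receives it with the matching recvmsg()
 * and can mmap() the secret memory like any other memfd. */
static int send_fd(int sock, int fd)
{
	char dummy = 0;
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	union {
		char buf[CMSG_SPACE(sizeof(int))];
		struct cmsghdr align;
	} u = { 0 };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = u.buf,
		.msg_controllen = sizeof(u.buf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

	return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}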

> Hagen
Hagen Paul Pfeifer Nov. 9, 2020, 10:41 a.m. UTC | #15
> On 11/04/2020 6:02 PM Mike Rapoport <rppt@kernel.org> wrote:
> 
> Yes, this will work. The processes that share the memfd_secret file
> descriptor will have access to the same memory pages, pretty much like
> with shared memory.

Perfect!

Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>

Thank you for the effort, Mike. If the zeroize feature is also included it will
be great! Memset-ing all pages after use is just overkill; a dedicated flag for
memfd_secret (or mmap) would be superior.

Hagen