[RFC,v2,1/3] x86/sgx: Add SGX specific LSM hooks

Message ID 72420cff8fa944b64e57df8d25c63bd30f8aacfa.1561588012.git.cedric.xing@intel.com (mailing list archive)
State Superseded
Series security/x86/sgx: SGX specific LSM hooks

Commit Message

Xing, Cedric June 27, 2019, 6:56 p.m. UTC
SGX enclaves are loaded from pages in regular memory. Given the ability to
create executable pages, the newly added SGX subsystem may present a backdoor
for adversaries to circumvent LSM policies, e.g., by creating an executable
enclave page from a modified regular page that LSM would otherwise prohibit
from being made executable. This raises the primary question of whether an
enclave page should be allowed to be created from a given source page in
regular memory.

A related question is whether to grant/deny an mprotect() request on a given
enclave page/range. mprotect() is traditionally covered by the
security_file_mprotect() hook; however, enclave pages have a different
lifespan than either MAP_PRIVATE or MAP_SHARED pages. In particular,
MAP_PRIVATE pages have the same lifespan as the VMA, and MAP_SHARED pages
have the same lifespan as the backing file (on disk), but enclave pages have
the lifespan of the enclave's file descriptor. For example, enclave pages can
be munmap()'ed then mmap()'ed again without losing their contents (like
MAP_SHARED), but all enclave pages are lost once the enclave's file
descriptor is closed (like MAP_PRIVATE). Consequently, LSM modules need a new
data structure for tracking protections of enclave pages/ranges so that they
can make proper decisions at mmap()/mprotect() syscalls.
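
For illustration, below is a minimal userspace sketch of that lifespan
difference. The /dev/sgx/enclave device node and the ioctl-based build flow
are assumptions about the proposed SGX driver, not part of this patch; error
handling is omitted.

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* the enclave's lifespan is tied to this file descriptor */
		int fd = open("/dev/sgx/enclave", O_RDWR);
		size_t len = 4096;
		void *p;

		/* ... ECREATE/EADD/EINIT ioctls would build the enclave ... */

		p = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
		munmap(p, len);	/* contents survive, like MAP_SHARED */
		p = mmap(NULL, len, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);

		close(fd);	/* all enclave pages lost, like MAP_PRIVATE */
		return 0;
	}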

The last question, which is orthogonal to the two above, is whether or not to
allow a given enclave to launch/run. Enclave pages are not visible to the
rest of the system, so they offer, to some extent, a better place for
malicious software to hide. Thus, it is sometimes desirable to
whitelist/blacklist enclaves by their measurements, signing public keys, or
image files.

To address the questions above, two new LSM hooks are added for enclaves.
  - security_enclave_load() – This hook allows LSM to decide whether or not
    to allow instantiation of a range of enclave pages using the specified
    VMA. It is invoked when a range of enclave pages is about to be loaded.
    It serves three purposes: 1) indicate to LSM that the file struct in
    question is an enclave; 2) allow LSM to decide whether or not to
    instantiate those pages; and 3) allow LSM to initialize internal data
    structures for tracking origins/protections of those pages.
  - security_enclave_init() – This hook allows whitelisting/blacklisting or
    performing whatever checks are deemed appropriate before an enclave is
    allowed to run. An LSM module may opt to use the file backing the
    SIGSTRUCT as a proxy to dictate allowed protections for anonymous pages.
    A minimal sketch of wiring up both hooks follows this list.
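
The sketch below shows how an LSM module might register and implement the two
hooks. The hook names and signatures are the ones added by this patch; the
module name "foo", its policy checks, and the interpretation of @flags as
PROT_* bits are placeholder assumptions.

	#include <linux/lsm_hooks.h>
	#include <linux/lsm_ema.h>
	#include <linux/mman.h>

	static int foo_enclave_load(struct file *encl, struct lsm_ema *ema,
				    size_t flags, struct vm_area_struct *source)
	{
		/* placeholder policy: no page may be writable and executable */
		if ((flags & PROT_WRITE) && (flags & PROT_EXEC))
			return -EACCES;
		return 0;	/* allow; EMA bookkeeping is done by the caller */
	}

	static int foo_enclave_init(struct file *encl,
				    struct sgx_sigstruct *sigstruct,
				    struct vm_area_struct *source)
	{
		/* placeholder policy: e.g., compare the measurement or signing
		 * key in *sigstruct against a whitelist
		 */
		return 0;
	}

	static struct security_hook_list foo_hooks[] __lsm_ro_after_init = {
		LSM_HOOK_INIT(enclave_load, foo_enclave_load),
		LSM_HOOK_INIT(enclave_init, foo_enclave_init),
	};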

mprotect() of enclave pages continues to be governed by
security_file_mprotect(), with the expectation that LSM is able to
distinguish between regular and enclave pages inside the hook. For mmap(),
the SGX subsystem is expected to invoke security_file_mprotect() explicitly
to check the requested protections against those recorded for existing
enclave pages. As stated earlier, enclave pages have a different lifespan
than the existing MAP_PRIVATE and MAP_SHARED pages, so they require a new
data structure outside of the VMA to track their protections and/or origins.
The Enclave Memory Area (or EMA for short) is introduced to address this
need. EMAs are maintained by the LSM framework for all LSM modules to share.
EMAs are instantiated for enclaves only, so they impose no memory/performance
overhead on regular applications/files. Please see include/linux/lsm_ema.h
and security/lsm_ema.c for details.
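
As a hedged sketch of the expected call site in the SGX driver at page-load
(EADD) time: sgx_encl_add_page() and its parameters are hypothetical names;
only security_enclave_load()'s signature comes from this patch.

	static int sgx_encl_add_page(struct file *encl_file, unsigned long addr,
				     unsigned long len, unsigned long prot,
				     struct vm_area_struct *src_vma)
	{
		int ret;

		/* let LSMs approve the load and set up EMA tracking */
		ret = security_enclave_load(encl_file, addr, addr + len, prot,
					    src_vma);
		if (ret)
			return ret;

		/* ... copy the source pages into the EPC via EADD ... */
		return 0;
	}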

A new setup parameter, lsm.ema.cache_decisions, has been introduced to offer
a choice between memory consumption and accuracy of audit logs. Enabling
lsm.ema.cache_decisions causes the LSM framework NOT to keep backing files
open for EMAs. While that saves memory, it requires LSM modules to make and
cache decisions ahead of time, and makes it difficult for LSM modules to
generate accurate audit logs. System administrators are expected to run LSM
in permissive mode with lsm.ema.cache_decisions off to determine the minimal
permissions needed, and then turn it back on in enforcing mode for optimal
performance and memory usage. lsm.ema.cache_decisions is on by default and
can be turned off by appending "lsm.ema.cache_decisions=0" or
"lsm.ema.cache_decisions=off" to the kernel command line.

Signed-off-by: Cedric Xing <cedric.xing@intel.com>
---
 include/linux/lsm_ema.h   | 171 ++++++++++++++++++++++++++++++++++++++
 include/linux/lsm_hooks.h |  29 +++++++
 include/linux/security.h  |  23 +++++
 security/Makefile         |   1 +
 security/lsm_ema.c        | 132 +++++++++++++++++++++++++++++
 security/security.c       |  47 ++++++++++-
 6 files changed, 402 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/lsm_ema.h
 create mode 100644 security/lsm_ema.c

Comments

Casey Schaufler June 27, 2019, 10:06 p.m. UTC | #1
On 6/27/2019 11:56 AM, Cedric Xing wrote:
> SGX enclaves are loaded from pages in regular memory. Given the ability to
> create executable pages, the newly added SGX subsystem may present a backdoor
> for adversaries to circumvent LSM policies, such as creating an executable
> enclave page from a modified regular page that would otherwise not be made
> executable as prohibited by LSM. Therefore arises the primary question of
> whether an enclave page should be allowed to be created from a given source
> page in regular memory.
>
> A related question is whether to grant/deny a mprotect() request on a given
> enclave page/range. mprotect() is traditionally covered by
> security_file_mprotect() hook, however, enclave pages have a different lifespan
> than either MAP_PRIVATE or MAP_SHARED. Particularly, MAP_PRIVATE pages have the
> same lifespan as the VMA while MAP_SHARED pages have the same lifespan as the
> backing file (on disk), but enclave pages have the lifespan of the enclave’s
> file descriptor. For example, enclave pages could be munmap()’ed then mmap()’ed
> again without losing contents (like MAP_SHARED), but all enclave pages will be
> lost once its file descriptor has been closed (like MAP_PRIVATE). That said,
> LSM modules need some new data structure for tracking protections of enclave
> pages/ranges so that they can make proper decisions at mmap()/mprotect()
> syscalls.
>
> The last question, which is orthogonal to the 2 above, is whether or not to
> allow a given enclave to launch/run. Enclave pages are not visible to the rest
> of the system, so to some extent offer a better place for malicious software to
> hide. Thus, it is sometimes desirable to whitelist/blacklist enclaves by their
> measurements, signing public keys, or image files.
>
> To address the questions above, 2 new LSM hooks are added for enclaves.
>   - security_enclave_load() – This hook allows LSM to decide whether or not to
>     allow instantiation of a range of enclave pages using the specified VMA. It
>     is invoked when a range of enclave pages is about to be loaded. It serves 3
>     purposes: 1) indicate to LSM that the file struct in subject is an enclave;
>     2) allow LSM to decide whether or not to instantiate those pages and 3)
>     allow LSM to initialize internal data structures for tracking
>     origins/protections of those pages.
>   - security_enclave_init() – This hook allows whitelisting/blacklisting or
>     performing whatever checks deemed appropriate before an enclave is allowed
>     to run. An LSM module may opt to use the file backing the SIGSTRUCT as a
>     proxy to dictate allowed protections for anonymous pages.
>
> mprotect() of enclave pages continue to be governed by
> security_file_mprotect(), with the expectation that LSM is able to distinguish
> between regular and enclave pages inside the hook. For mmap(), the SGX
> subsystem is expected to invoke security_file_mprotect() explicitly to check
> protections against the requested protections for existing enclave pages. As
> stated earlier, enclave pages have different lifespan than the existing
> MAP_PRIVATE and MAP_SHARED pages, so would require a new data structure outside
> of VMA to track their protections and/or origins. Enclave Memory Area (or EMA
> for short) has been introduced to address the need. EMAs are maintained by the
> LSM framework for all LSM modules to share. EMAs will be instantiated for
> enclaves only so will not impose memory/performance overheads for regular
> applications/files. Please see include/linux/lsm_ema.h and security/lsm_ema.c
> for details.
>
> A new setup parameter – lsm.ema.cache_decisions has been introduced to offer
> the choice between memory consumption and accuracy of audit logs. Enabling
> lsm.ema.cache_decisions causes LSM framework NOT to keep backing files open for
> EMAs. While that saves memory, it requires LSM modules to make and cache
> decisions ahead of time, and makes it difficult for LSM modules to generate
> accurate audit logs. System administrators are expected to run LSM in
> permissive mode with lsm.ema.cache_decisions off to determine the minimal
> permissions needed, and then turn it back on in enforcing mode for optimal
> performance and memory usage. lsm.ema.cache_decisions is on by default and
> could be turned off by appending “lsm.ema.cache_decisions=0” or
> “lsm.ema.cache_decisions=off” to the kernel command line.
>
> Signed-off-by: Cedric Xing <cedric.xing@intel.com>
> ---
>  include/linux/lsm_ema.h   | 171 ++++++++++++++++++++++++++++++++++++++
>  include/linux/lsm_hooks.h |  29 +++++++
>  include/linux/security.h  |  23 +++++
>  security/Makefile         |   1 +
>  security/lsm_ema.c        | 132 +++++++++++++++++++++++++++++
>  security/security.c       |  47 ++++++++++-
>  6 files changed, 402 insertions(+), 1 deletion(-)
>  create mode 100644 include/linux/lsm_ema.h
>  create mode 100644 security/lsm_ema.c

Don't use "lsm_ema". This isn't LSM infrastructure.
Three letter abbreviations are easy to type, but are
doomed to encounter conflicts and lead to confusion.
I suggest that you use "enclave", because it doesn't
start off conflicting with anything and is descriptive.

This code should not be mixed in with the LSM infrastructure.
It should all be contained in its own module, under
security/enclave.

> diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h
> new file mode 100644
> index 000000000000..a09b8f96da05
> --- /dev/null
> +++ b/include/linux/lsm_ema.h

There's no need for this header to be used outside the enclave
LSM. It should be "security/enclave/enclave.h"


> @@ -0,0 +1,171 @@
> +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
> +/**
> + * Enclave Memory Area interface for LSM modules
> + *
> + * Copyright(c) 2016-19 Intel Corporation.
> + */
> +
> +#ifndef _LSM_EMA_H_
> +#define _LSM_EMA_H_
> +
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <linux/fs.h>
> +#include <linux/file.h>
> +
> +/**
> + * lsm_ema - LSM Enclave Memory Area structure

How about s/lsm_ema/enclave/ ?

> + *
> + * Data structure to track origins of enclave pages
> + *
> + * @link:
> + *	Link to adjacent EMAs. EMAs are sorted by their addresses in ascending
> + *	order
> + * @start:
> + *	Starting address
> + * @end:
> + *	Ending address
> + * @source:
> + *	File from which this range was loaded from, or NULL if not loaded from
> + *	any files
> + */
> +struct lsm_ema {
> +	struct list_head	link;
> +	size_t			start;
> +	size_t			end;
> +	struct file		*source;
> +};
> +
> +#define lsm_ema_data(ema, blob_sizes)	\
> +	((char *)((struct lsm_ema *)(ema) + 1) + blob_sizes.lbs_ema_data)

Who uses this? The enclave LSM? Convention would have this
be selinux_enclave(ema) for the SELinux code. This is
inconsistent with the way other blobs are handled.

> +
> +/**
> + * lsm_ema_map - LSM Enclave Memory Map structure

enclave_map

> + *
> + * Container for EMAs of an enclave
> + *
> + * @list:
> + *	Head of a list of sorted EMAs
> + * @lock:
> + *	Acquire before querying/updating the list of EMAs
> + */
> +struct lsm_ema_map {
> +	struct list_head	list;
> +	struct mutex		lock;
> +};
> +
> +/**
> + * These are functions to be used by the LSM framework, and must be defined
> + * regardless CONFIG_INTEL_SGX is enabled or not.

Not acceptable for the LSM infrastructure. They
are inconsistent with the way data is used there.

> + */
> +
> +#ifdef CONFIG_INTEL_SGX
> +void lsm_ema_global_init(size_t);
> +void lsm_free_ema_map(atomic_long_t *);
> +#else
> +static inline void lsm_ema_global_init(size_t ema_data_size)
> +{
> +}
> +
> +static inline void lsm_free_ema_map(atomic_long_t *p)
> +{
> +}
> +#endif
> +
> +/**
> + * Below are APIs to be used by LSM modules
> + */
> +
> +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *);
> +struct lsm_ema *lsm_alloc_ema(void);

Do you mean security_alloc_enclave()?
That would go into security/security.h

> +void lsm_free_ema(struct lsm_ema *);

Do you mean security_free_enclave()?
That would go into security/security.h

> +void lsm_init_ema(struct lsm_ema *, size_t, size_t, struct file *);

This goes in the enclave LSM.

> +int lsm_merge_ema(struct lsm_ema *, struct lsm_ema_map *);
> +struct lsm_ema *lsm_split_ema(struct lsm_ema *, size_t, struct lsm_ema_map *);
> +
> +static inline struct lsm_ema_map *lsm_get_ema_map(struct file *f)
> +{
> +	return (void *)atomic_long_read(f->f_security);
> +}
> +
> +static inline int __must_check lsm_lock_ema(struct lsm_ema_map *map)
> +{
> +	return mutex_lock_interruptible(&map->lock);
> +}
> +
> +static inline void lsm_unlock_ema(struct lsm_ema_map *map)
> +{
> +	mutex_unlock(&map->lock);
> +}
> +
> +static inline struct lsm_ema *lsm_prev_ema(struct lsm_ema *p,
> +					   struct lsm_ema_map *map)
> +{
> +	p = list_prev_entry(p, link);
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline struct lsm_ema *lsm_next_ema(struct lsm_ema *p,
> +					   struct lsm_ema_map *map)
> +{
> +	p = list_next_entry(p, link);
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline struct lsm_ema *lsm_find_ema(struct lsm_ema_map *map, size_t a)
> +{
> +	struct lsm_ema *p;
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	list_for_each_entry(p, &map->list, link)
> +		if (a < p->end)
> +			break;
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline int lsm_insert_ema(struct lsm_ema_map *map, struct lsm_ema *n)
> +{
> +	struct lsm_ema *p = lsm_find_ema(map, n->start);
> +
> +	if (!p)
> +		list_add_tail(&n->link, &map->list);
> +	else if (n->end <= p->start)
> +		list_add_tail(&n->link, &p->link);
> +	else
> +		return -EEXIST;
> +
> +	lsm_merge_ema(n, map);
> +	if (p)
> +		lsm_merge_ema(p, map);
> +	return 0;
> +}
> +
> +static inline int lsm_for_each_ema(struct lsm_ema_map *map, size_t start,
> +				   size_t end, int (*cb)(struct lsm_ema *,
> +							 void *), void *arg)
> +{
> +	struct lsm_ema *ema;
> +	int rc;
> +
> +	ema = lsm_find_ema(map, start);
> +	while (ema && end > ema->start) {
> +		if (start > ema->start)
> +			lsm_split_ema(ema, start, map);
> +		if (end < ema->end)
> +			ema = lsm_split_ema(ema, end, map);
> +
> +		rc = (*cb)(ema, arg);
> +		lsm_merge_ema(ema, map);
> +		if (rc)
> +			return rc;
> +
> +		ema = lsm_next_ema(ema, map);
> +	}
> +
> +	if (ema)
> +		lsm_merge_ema(ema, map);
> +	return 0;
> +}

There is no way that these belong as part of the LSM
infrastructure. If you need an enclave management API
you need to find some other place for it.

> +
> +#endif /* _LSM_EMA_H_ */
> diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
> index 47f58cfb6a19..ade1f9f81e64 100644
> --- a/include/linux/lsm_hooks.h
> +++ b/include/linux/lsm_hooks.h
> @@ -29,6 +29,8 @@
>  #include <linux/init.h>
>  #include <linux/rculist.h>
>  
> +struct lsm_ema;
> +
>  /**
>   * union security_list_options - Linux Security Module hook function list
>   *
> @@ -1446,6 +1448,21 @@
>   * @bpf_prog_free_security:
>   *	Clean up the security information stored inside bpf prog.
>   *
> + * @enclave_load:
> + *	Decide if a range of pages shall be allowed to be loaded into an
> + *	enclave
> + *
> + *	@encl points to the file identifying the target enclave
> + *	@ema specifies the target range to be loaded
> + *	@flags contains protections being requested for the target range
> + *	@source points to the VMA containing the source pages to be loaded
> + *
> + * @enclave_init:
> + *	Decide if an enclave shall be allowed to launch
> + *
> + *	@encl points to the file identifying the target enclave being launched
> + *	@sigstruct contains a copy of the SIGSTRUCT in kernel memory
> + *	@source points to the VMA backing SIGSTRUCT in user memory
>   */
>  union security_list_options {
>  	int (*binder_set_context_mgr)(struct task_struct *mgr);
> @@ -1807,6 +1824,13 @@ union security_list_options {
>  	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
>  	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
>  #endif /* CONFIG_BPF_SYSCALL */
> +
> +#ifdef CONFIG_INTEL_SGX
> +	int (*enclave_load)(struct file *encl, struct lsm_ema *ema,
> +			    size_t flags, struct vm_area_struct *source);
> +	int (*enclave_init)(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			    struct vm_area_struct *source);
> +#endif
>  };
>  
>  struct security_hook_heads {
> @@ -2046,6 +2070,10 @@ struct security_hook_heads {
>  	struct hlist_head bpf_prog_alloc_security;
>  	struct hlist_head bpf_prog_free_security;
>  #endif /* CONFIG_BPF_SYSCALL */
> +#ifdef CONFIG_INTEL_SGX
> +	struct hlist_head enclave_load;
> +	struct hlist_head enclave_init;
> +#endif
>  } __randomize_layout;
>  
>  /*
> @@ -2069,6 +2097,7 @@ struct lsm_blob_sizes {
>  	int	lbs_ipc;
>  	int	lbs_msg_msg;
>  	int	lbs_task;
> +	int	lbs_ema_data;

Is a module like SELinux expected to have its own
data for enclave? That's the only case where you would
have a enclave entry in the blob.

>  };
>  
>  /*
> diff --git a/include/linux/security.h b/include/linux/security.h
> index 659071c2e57c..52c200810004 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -1829,5 +1829,28 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
>  #endif /* CONFIG_SECURITY */
>  #endif /* CONFIG_BPF_SYSCALL */
>  
> +#ifdef CONFIG_INTEL_SGX
> +struct sgx_sigstruct;
> +#ifdef CONFIG_SECURITY
> +int security_enclave_load(struct file *encl, size_t start, size_t end,
> +			  size_t flags, struct vm_area_struct *source);
> +int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			  struct vm_area_struct *source);
> +#else
> +static inline int security_enclave_load(struct file *encl, size_t start,
> +					size_t end, struct vm_area_struct *src)
> +{
> +	return 0;
> +}
> +
> +static inline int security_enclave_init(struct file *encl,
> +					struct sgx_sigstruct *sigstruct,
> +					struct vm_area_struct *src)
> +{
> +	return 0;
> +}
> +#endif /* CONFIG_SECURITY */
> +#endif /* CONFIG_INTEL_SGX */
> +
>  #endif /* ! __LINUX_SECURITY_H */
>  
> diff --git a/security/Makefile b/security/Makefile
> index c598b904938f..1bab8f1344b6 100644
> --- a/security/Makefile
> +++ b/security/Makefile
> @@ -28,6 +28,7 @@ obj-$(CONFIG_SECURITY_YAMA)		+= yama/
>  obj-$(CONFIG_SECURITY_LOADPIN)		+= loadpin/
>  obj-$(CONFIG_SECURITY_SAFESETID)       += safesetid/
>  obj-$(CONFIG_CGROUP_DEVICE)		+= device_cgroup.o
> +obj-$(CONFIG_INTEL_SGX)			+= lsm_ema.o

This belongs in a subdirectory.

>  
>  # Object integrity file lists
>  subdir-$(CONFIG_INTEGRITY)		+= integrity
> diff --git a/security/lsm_ema.c b/security/lsm_ema.c
> new file mode 100644
> index 000000000000..68fae0724d37
> --- /dev/null
> +++ b/security/lsm_ema.c
> @@ -0,0 +1,132 @@
> +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
> +// Copyright(c) 2016-18 Intel Corporation.
> +
> +#include <linux/lsm_ema.h>
> +#include <linux/slab.h>
> +
> +static struct kmem_cache *lsm_ema_cache;
> +static size_t lsm_ema_data_size;
> +static int lsm_ema_cache_decisions = 1;
> +
> +void lsm_ema_global_init(size_t ema_size)
> +{
> +	BUG_ON(lsm_ema_data_size > 0);
> +
> +	lsm_ema_data_size = ema_size;
> +
> +	ema_size += sizeof(struct lsm_ema);
> +	ema_size = max(ema_size, sizeof(struct lsm_ema_map));
> +	lsm_ema_cache = kmem_cache_create("lsm_ema_cache", ema_size,
> +					  __alignof__(struct lsm_ema),
> +					  SLAB_PANIC, NULL);
> +
> +}
> +
> +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *p)
> +{
> +	struct lsm_ema_map *map;
> +
> +	map = (typeof(map))atomic_long_read(p);
> +	if (!map) {
> +		long n;
> +
> +		map = (typeof(map))lsm_alloc_ema();
> +		if (!map)
> +			return NULL;
> +
> +		INIT_LIST_HEAD(&map->list);
> +		mutex_init(&map->lock);
> +
> +		n = atomic_long_cmpxchg(p, 0, (long)map);
> +		if (n) {
> +			atomic_long_t a;
> +			atomic_long_set(&a, (long)map);
> +			map = (typeof(map))n;
> +			lsm_free_ema_map(&a);
> +		}
> +	}
> +	return map;
> +}
> +
> +void lsm_free_ema_map(atomic_long_t *p)
> +{
> +	struct lsm_ema_map *map;
> +	struct lsm_ema *ema, *n;
> +
> +	map = (typeof(map))atomic_long_read(p);
> +	if (!map)
> +		return;
> +
> +	BUG_ON(mutex_is_locked(&map->lock));
> +
> +	list_for_each_entry_safe(ema, n, &map->list, link)
> +		lsm_free_ema(ema);
> +	kmem_cache_free(lsm_ema_cache, map);
> +}
> +
> +struct lsm_ema *lsm_alloc_ema(void)
> +{
> +	return kmem_cache_zalloc(lsm_ema_cache, GFP_KERNEL);
> +}
> +
> +void lsm_free_ema(struct lsm_ema *ema)
> +{
> +	list_del(&ema->link);
> +	if (ema->source)
> +		fput(ema->source);
> +	kmem_cache_free(lsm_ema_cache, ema);
> +}
> +
> +void lsm_init_ema(struct lsm_ema *ema, size_t start, size_t end,
> +		  struct file *source)
> +{
> +	INIT_LIST_HEAD(&ema->link);
> +	ema->start = start;
> +	ema->end = end;
> +	if (!lsm_ema_cache_decisions && source)
> +		ema->source = get_file(source);
> +}
> +
> +int lsm_merge_ema(struct lsm_ema *p, struct lsm_ema_map *map)
> +{
> +	struct lsm_ema *prev = list_prev_entry(p, link);
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	if (&prev->link == &map->list || prev->end != p->start ||
> +	    prev->source != p->source ||
> +	    memcmp(prev + 1, p + 1, lsm_ema_data_size))
> +		return 0;
> +
> +	p->start = prev->start;
> +	fput(prev->source);
> +	lsm_free_ema(prev);
> +	return 1;
> +}
> +
> +struct lsm_ema *lsm_split_ema(struct lsm_ema *p, size_t at,
> +			      struct lsm_ema_map *map)
> +{
> +	struct lsm_ema *n;
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	if (at <= p->start || at >= p->end)
> +		return p;
> +
> +	n = lsm_alloc_ema();
> +	if (likely(n)) {
> +		lsm_init_ema(n, p->start, at, p->source);
> +		memcpy(n + 1, p + 1, lsm_ema_data_size);
> +		p->start = at;
> +		list_add_tail(&n->link, &p->link);
> +	}
> +	return n;
> +}
> +
> +static int __init set_ema_cache_decisions(char *str)
> +{
> +	lsm_ema_cache_decisions = (strcmp(str, "0") && strcmp(str, "off"));
> +	return 1;
> +}
> +__setup("lsm.ema.cache_decisions=", set_ema_cache_decisions);
> diff --git a/security/security.c b/security/security.c
> index f493db0bf62a..d50883f18be2 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -17,6 +17,7 @@
>  #include <linux/init.h>
>  #include <linux/kernel.h>
>  #include <linux/lsm_hooks.h>
> +#include <linux/lsm_ema.h>
>  #include <linux/integrity.h>
>  #include <linux/ima.h>
>  #include <linux/evm.h>
> @@ -41,7 +42,9 @@ static struct kmem_cache *lsm_file_cache;
>  static struct kmem_cache *lsm_inode_cache;
>  
>  char *lsm_names;
> -static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init;
> +static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init = {
> +	.lbs_file = sizeof(atomic_long_t) * IS_ENABLED(CONFIG_INTEL_SGX),
> +};

This belongs in the module specific code.
It does not belong here.

>  
>  /* Boot-time LSM user choice */
>  static __initdata const char *chosen_lsm_order;
> @@ -169,6 +172,7 @@ static void __init lsm_set_blob_sizes(struct lsm_blob_sizes *needed)
>  	lsm_set_blob_size(&needed->lbs_ipc, &blob_sizes.lbs_ipc);
>  	lsm_set_blob_size(&needed->lbs_msg_msg, &blob_sizes.lbs_msg_msg);
>  	lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
> +	lsm_set_blob_size(&needed->lbs_ema_data, &blob_sizes.lbs_ema_data);
>  }
>  
>  /* Prepare LSM for initialization. */
> @@ -314,6 +318,7 @@ static void __init ordered_lsm_init(void)
>  		lsm_inode_cache = kmem_cache_create("lsm_inode_cache",
>  						    blob_sizes.lbs_inode, 0,
>  						    SLAB_PANIC, NULL);
> +	lsm_ema_global_init(blob_sizes.lbs_ema_data);
>  
>  	lsm_early_cred((struct cred *) current->cred);
>  	lsm_early_task(current);
> @@ -1357,6 +1362,7 @@ void security_file_free(struct file *file)
>  	blob = file->f_security;
>  	if (blob) {
>  		file->f_security = NULL;
> +		lsm_free_ema_map(blob);
>  		kmem_cache_free(lsm_file_cache, blob);
>  	}
>  }
> @@ -1420,6 +1426,7 @@ int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
>  {
>  	return call_int_hook(file_mprotect, 0, vma, reqprot, prot);
>  }
> +EXPORT_SYMBOL(security_file_mprotect);
>  
>  int security_file_lock(struct file *file, unsigned int cmd)
>  {
> @@ -2355,3 +2362,41 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
>  	call_void_hook(bpf_prog_free_security, aux);
>  }
>  #endif /* CONFIG_BPF_SYSCALL */
> +
> +#ifdef CONFIG_INTEL_SGX
> +int security_enclave_load(struct file *encl, size_t start, size_t end,
> +			  size_t flags, struct vm_area_struct *src)

You are mixing module specific code into the infrastructure.
All of this should be in the enclave code. None of it should be here.

> +{
> +	struct lsm_ema_map *map;
> +	struct lsm_ema *ema;
> +	int rc;
> +
> +	map = lsm_init_or_get_ema_map(encl->f_security);
> +	if (unlikely(!map))
> +		return -ENOMEM;
> +
> +	ema = lsm_alloc_ema();
> +	if (unlikely(!ema))
> +		return -ENOMEM;
> +
> +	lsm_init_ema(ema, start, end, src->vm_file);
> +	rc = call_int_hook(enclave_load, 0, encl, ema, flags, src);
> +	if (!rc)
> +		rc = lsm_lock_ema(map);
> +	if (!rc) {
> +		rc = lsm_insert_ema(map, ema);
> +		lsm_unlock_ema(map);
> +	}
> +	if (rc)
> +		lsm_free_ema(ema);
> +	return rc;
> +}
> +EXPORT_SYMBOL(security_enclave_load);
> +
> +int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			  struct vm_area_struct *src)
> +{
> +	return call_int_hook(enclave_init, 0, encl, sigstruct, src);
> +}
> +EXPORT_SYMBOL(security_enclave_init);
> +#endif /* CONFIG_INTEL_SGX */
Xing, Cedric June 27, 2019, 10:52 p.m. UTC | #2
Hi Casey,

> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> Sent: Thursday, June 27, 2019 3:07 PM
> 
> Don't use "lsm_ema". This isn't LSM infrastructure.
> Three letter abbreviations are easy to type, but are doomed to encounter
> conflicts and lead to confusion.
> I suggest that you use "enclave", because it doesn't start off
> conflicting with anything and is descriptive.
> 
> This code should not be mixed in with the LSM infrastructure.
> It should all be contained in its own module, under security/enclave.

lsm_ema is *intended* to be part of the LSM infrastructure. It is going to be shared among all LSMs that would like to track enclave pages and their origins. And EMAs could be extended to store more information as deemed appropriate by each LSM module. The last patch of this series shows how to extend EMA inside SELinux.

> 
> > diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h new
> > file mode 100644 index 000000000000..a09b8f96da05
> > --- /dev/null
> > +++ b/include/linux/lsm_ema.h
> 
> There's no need for this header to be used outside the enclave
> LSM. It should be "security/enclave/enclave.h"

This header file is supposed to be used by all LSM modules, similar to lsm_hooks.h. Hence it is placed in the same location.

> 
> 
> > @@ -0,0 +1,171 @@
> > +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
> > +/**
> > + * Enclave Memory Area interface for LSM modules
> > + *
> > + * Copyright(c) 2016-19 Intel Corporation.
> > + */
> > +
> > +#ifndef _LSM_EMA_H_
> > +#define _LSM_EMA_H_
> > +
> > +#include <linux/list.h>
> > +#include <linux/mutex.h>
> > +#include <linux/fs.h>
> > +#include <linux/file.h>
> > +
> > +/**
> > + * lsm_ema - LSM Enclave Memory Area structure
> 
> How about s/lsm_ema/enclave/ ?

I understand your suggestion, but this structure is shared among all LSMs. And I think lsm_ema is pretty descriptive without being too verbose.

> 
> > + *
> > + * Data structure to track origins of enclave pages
> > + *
> > + * @link:
> > + *	Link to adjacent EMAs. EMAs are sorted by their addresses in
> ascending
> > + *	order
> > + * @start:
> > + *	Starting address
> > + * @end:
> > + *	Ending address
> > + * @source:
> > + *	File from which this range was loaded from, or NULL if not loaded
> from
> > + *	any files
> > + */
> > +struct lsm_ema {
> > +	struct list_head	link;
> > +	size_t			start;
> > +	size_t			end;
> > +	struct file		*source;
> > +};
> > +
> > +#define lsm_ema_data(ema, blob_sizes)	\
> > +	((char *)((struct lsm_ema *)(ema) + 1) + blob_sizes.lbs_ema_data)
> 
> Who uses this? The enclave LSM? Convention would have this
> be selinux_enclave(ema) for the SELinux code. This is
> inconsistent with the way other blobs are handled.

This is to be used in various LSMs. As you can see in the last patch of this series, selinux_ema() is defined as a wrapper of this macro.
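
For reference, a hedged sketch of what that wrapper might look like (the
exact definition in patch 3/3 may differ):

	static inline struct ema_security_struct *selinux_ema(struct lsm_ema *ema)
	{
		return (struct ema_security_struct *)
			lsm_ema_data(ema, selinux_blob_sizes);
	}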

> 
> > +
> > +/**
> > + * lsm_ema_map - LSM Enclave Memory Map structure
> 
> enclave_map
> 
> > + *
> > + * Container for EMAs of an enclave
> > + *
> > + * @list:
> > + *	Head of a list of sorted EMAs
> > + * @lock:
> > + *	Acquire before querying/updating the list of EMAs
> > + */
> > +struct lsm_ema_map {
> > +	struct list_head	list;
> > +	struct mutex		lock;
> > +};
> > +
> > +/**
> > + * These are functions to be used by the LSM framework, and must be
> defined
> > + * regardless CONFIG_INTEL_SGX is enabled or not.
> 
> Not acceptable for the LSM infrastructure. They
> are inconsistent with the way data is used there.

I'm not sure I understand this comment.

> 
> > + */
> > +
> > +#ifdef CONFIG_INTEL_SGX
> > +void lsm_ema_global_init(size_t);
> > +void lsm_free_ema_map(atomic_long_t *);
> > +#else
> > +static inline void lsm_ema_global_init(size_t ema_data_size)
> > +{
> > +}
> > +
> > +static inline void lsm_free_ema_map(atomic_long_t *p)
> > +{
> > +}
> > +#endif
> > +
> > +/**
> > + * Below are APIs to be used by LSM modules
> > + */
> > +
> > +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *);
> > +struct lsm_ema *lsm_alloc_ema(void);
> 
> Do you mean security_alloc_enclave()?
> That would go into security/security.h

No. Neither lsm_alloc_ema() above, nor lsm_free_ema() below, is an LSM hook. They are APIs for dealing with EMAs.

> 
> > +void lsm_free_ema(struct lsm_ema *);
> 
> Do you mean security_free_enclave()?
> That would go into security/security.h
> 
> > +void lsm_init_ema(struct lsm_ema *, size_t, size_t, struct file *);
> 
> This goes in the enclave LSM.

There's NO enclave LSM. This patch is introducing new LSM hooks applicable to all LSM modules, but not introducing new LSM modules.

> 
> > +int lsm_merge_ema(struct lsm_ema *, struct lsm_ema_map *);
> > +struct lsm_ema *lsm_split_ema(struct lsm_ema *, size_t, struct
> lsm_ema_map *);
> > +
> > +static inline struct lsm_ema_map *lsm_get_ema_map(struct file *f)
> > +{
> > +	return (void *)atomic_long_read(f->f_security);
> > +}
> > +
> > +static inline int __must_check lsm_lock_ema(struct lsm_ema_map *map)
> > +{
> > +	return mutex_lock_interruptible(&map->lock);
> > +}
> > +
> > +static inline void lsm_unlock_ema(struct lsm_ema_map *map)
> > +{
> > +	mutex_unlock(&map->lock);
> > +}
> > +
> > +static inline struct lsm_ema *lsm_prev_ema(struct lsm_ema *p,
> > +					   struct lsm_ema_map *map)
> > +{
> > +	p = list_prev_entry(p, link);
> > +	return &p->link == &map->list ? NULL : p;
> > +}
> > +
> > +static inline struct lsm_ema *lsm_next_ema(struct lsm_ema *p,
> > +					   struct lsm_ema_map *map)
> > +{
> > +	p = list_next_entry(p, link);
> > +	return &p->link == &map->list ? NULL : p;
> > +}
> > +
> > +static inline struct lsm_ema *lsm_find_ema(struct lsm_ema_map *map,
> size_t a)
> > +{
> > +	struct lsm_ema *p;
> > +
> > +	BUG_ON(!mutex_is_locked(&map->lock));
> > +
> > +	list_for_each_entry(p, &map->list, link)
> > +		if (a < p->end)
> > +			break;
> > +	return &p->link == &map->list ? NULL : p;
> > +}
> > +
> > +static inline int lsm_insert_ema(struct lsm_ema_map *map, struct
> lsm_ema *n)
> > +{
> > +	struct lsm_ema *p = lsm_find_ema(map, n->start);
> > +
> > +	if (!p)
> > +		list_add_tail(&n->link, &map->list);
> > +	else if (n->end <= p->start)
> > +		list_add_tail(&n->link, &p->link);
> > +	else
> > +		return -EEXIST;
> > +
> > +	lsm_merge_ema(n, map);
> > +	if (p)
> > +		lsm_merge_ema(p, map);
> > +	return 0;
> > +}
> > +
> > +static inline int lsm_for_each_ema(struct lsm_ema_map *map, size_t
> start,
> > +				   size_t end, int (*cb)(struct lsm_ema *,
> > +							 void *), void *arg)
> > +{
> > +	struct lsm_ema *ema;
> > +	int rc;
> > +
> > +	ema = lsm_find_ema(map, start);
> > +	while (ema && end > ema->start) {
> > +		if (start > ema->start)
> > +			lsm_split_ema(ema, start, map);
> > +		if (end < ema->end)
> > +			ema = lsm_split_ema(ema, end, map);
> > +
> > +		rc = (*cb)(ema, arg);
> > +		lsm_merge_ema(ema, map);
> > +		if (rc)
> > +			return rc;
> > +
> > +		ema = lsm_next_ema(ema, map);
> > +	}
> > +
> > +	if (ema)
> > +		lsm_merge_ema(ema, map);
> > +	return 0;
> > +}
> 
> There is no way that these belong as part of the LSM
> infrastructure. If you need an enclave management API
> you need to find some other place for it.

They are NOT enclave management APIs. They don't manage enclaves. They track origins of enclave pages. They are needed by all LSMs.

As I stated in the cover letter, the primary question is how to prevent SGX from being abused as a backdoor to make executable pages that would otherwise not be executable without SGX. Any LSM module unaware of that would leave that "hole" open. So tracking enclave pages will become a common task for all LSMs that care about page protections, and that's why I placed it inside the LSM infrastructure.
Casey Schaufler June 27, 2019, 11:37 p.m. UTC | #3
On 6/27/2019 3:52 PM, Xing, Cedric wrote:
> Hi Casey,
>
>> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
>> Sent: Thursday, June 27, 2019 3:07 PM
>>
>> Don't use "lsm_ema". This isn't LSM infrastructure.
>> Three letter abbreviations are easy to type, but are doomed to encounter
>> conflicts and lead to confusion.
>> I suggest that you use "enclave", because it doesn't start off
>> conflicting with anything and is descriptive.
>>
>> This code should not be mixed in with the LSM infrastructure.
>> It should all be contained in its own module, under security/enclave.
> lsm_ema is *intended* to be part of the LSM infrastructure.

That's not going to fly, not for a minute.

>  It is going to be shared among all LSMs that would like to track enclave pages and their origins.

That's true for InfiniBand, tun and sctp as well. Look at their implementations.

> And they could be extended to store more information as deemed appropriate by the LSM module.

Which is what blobs are for, but that does not appear to be how
you're using either the file blob or your new ema blob.

>  The last patch of this series shows how to extend EMA inside SELinux.

I don't see (but I admit the code doesn't make a lot of sense to me)
anything you couldn't do in the SELinux code by adding data to the
file blob. The data you're adding to the LSM infrastructure doesn't
belong there, and it doesn't need to be there.

>
>>> diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h new
>>> file mode 100644 index 000000000000..a09b8f96da05
>>> --- /dev/null
>>> +++ b/include/linux/lsm_ema.h
>> There's no need for this header to be used outside the enclave
>> LSM. It should be "security/enclave/enclave.h"
> This header file is supposed to be used by all LSM modules, similar to lsm_hooks.h. Hence it is placed in the same location.
>
>>
>>> @@ -0,0 +1,171 @@
>>> +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
>>> +/**
>>> + * Enclave Memory Area interface for LSM modules
>>> + *
>>> + * Copyright(c) 2016-19 Intel Corporation.
>>> + */
>>> +
>>> +#ifndef _LSM_EMA_H_
>>> +#define _LSM_EMA_H_
>>> +
>>> +#include <linux/list.h>
>>> +#include <linux/mutex.h>
>>> +#include <linux/fs.h>
>>> +#include <linux/file.h>
>>> +
>>> +/**
>>> + * lsm_ema - LSM Enclave Memory Area structure
>> How about s/lsm_ema/enclave/ ?
> I understand your suggestion, but this structure is shared among all LSMs. And I think lsm_ema is pretty descriptive without being too verbose.
>
>>> + *
>>> + * Data structure to track origins of enclave pages
>>> + *
>>> + * @link:
>>> + *	Link to adjacent EMAs. EMAs are sorted by their addresses in
>> ascending
>>> + *	order
>>> + * @start:
>>> + *	Starting address
>>> + * @end:
>>> + *	Ending address
>>> + * @source:
>>> + *	File from which this range was loaded from, or NULL if not loaded
>> from
>>> + *	any files
>>> + */
>>> +struct lsm_ema {
>>> +	struct list_head	link;
>>> +	size_t			start;
>>> +	size_t			end;
>>> +	struct file		*source;
>>> +};
>>> +
>>> +#define lsm_ema_data(ema, blob_sizes)	\
>>> +	((char *)((struct lsm_ema *)(ema) + 1) + blob_sizes.lbs_ema_data)
>> Who uses this? The enclave LSM? Convention would have this
>> be selinux_enclave(ema) for the SELinux code. This is
>> inconsistent with the way other blobs are handled.
> This is to be used in various LSMs. As you can see in the last patch of this series, selinux_ema() is defined as a wrapper of this macro.
>
>>> +
>>> +/**
>>> + * lsm_ema_map - LSM Enclave Memory Map structure
>> enclave_map
>>
>>> + *
>>> + * Container for EMAs of an enclave
>>> + *
>>> + * @list:
>>> + *	Head of a list of sorted EMAs
>>> + * @lock:
>>> + *	Acquire before querying/updateing the list EMAs
>>> + */
>>> +struct lsm_ema_map {
>>> +	struct list_head	list;
>>> +	struct mutex		lock;
>>> +};
>>> +
>>> +/**
>>> + * These are functions to be used by the LSM framework, and must be
>> defined
>>> + * regardless CONFIG_INTEL_SGX is enabled or not.
>> Not acceptable for the LSM infrastructure. They
>> are inconsistent with the way data is used there.
> I'm not sure I understand this comment.

It means that your definition and use of the lsm_ema_blob
does not match the way other blobs are managed and used.
The LSM infrastructure uses these entries in a very particular
way, and you're trying to use it differently. You might be
able to change the rest of the enclave system to use it
correctly, or you might be able to find a different place
for it.


>>> + */
>>> +
>>> +#ifdef CONFIG_INTEL_SGX
>>> +void lsm_ema_global_init(size_t);
>>> +void lsm_free_ema_map(atomic_long_t *);
>>> +#else
>>> +static inline void lsm_ema_global_init(size_t ema_data_size)
>>> +{
>>> +}
>>> +
>>> +static inline void lsm_free_ema_map(atomic_long_t *p)
>>> +{
>>> +}
>>> +#endif
>>> +
>>> +/**
>>> + * Below are APIs to be used by LSM modules
>>> + */
>>> +
>>> +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *);
>>> +struct lsm_ema *lsm_alloc_ema(void);
>> Do you mean security_alloc_enclave()?
>> That would go into security/security.h
> No. Neither lsm_alloc_ema() above, nor lsm_free_ema() below, is LSM hook. They are APIs to deal with EMAs.
>
>>> +void lsm_free_ema(struct lsm_ema *);
>> Do you mean security_free_enclave()?
>> That would go into security/security.h
>>
>>> +void lsm_init_ema(struct lsm_ema *, size_t, size_t, struct file *);
>> This goes in the enclave LSM.
> There's NO enclave LSM. This patch is introducing new LSM hooks applicable to all LSM modules, but not introducing new LSM modules.
>
>>> +int lsm_merge_ema(struct lsm_ema *, struct lsm_ema_map *);
>>> +struct lsm_ema *lsm_split_ema(struct lsm_ema *, size_t, struct
>> lsm_ema_map *);
>>> +
>>> +static inline struct lsm_ema_map *lsm_get_ema_map(struct file *f)
>>> +{
>>> +	return (void *)atomic_long_read(f->f_security);
>>> +}
>>> +
>>> +static inline int __must_check lsm_lock_ema(struct lsm_ema_map *map)
>>> +{
>>> +	return mutex_lock_interruptible(&map->lock);
>>> +}
>>> +
>>> +static inline void lsm_unlock_ema(struct lsm_ema_map *map)
>>> +{
>>> +	mutex_unlock(&map->lock);
>>> +}
>>> +
>>> +static inline struct lsm_ema *lsm_prev_ema(struct lsm_ema *p,
>>> +					   struct lsm_ema_map *map)
>>> +{
>>> +	p = list_prev_entry(p, link);
>>> +	return &p->link == &map->list ? NULL : p;
>>> +}
>>> +
>>> +static inline struct lsm_ema *lsm_next_ema(struct lsm_ema *p,
>>> +					   struct lsm_ema_map *map)
>>> +{
>>> +	p = list_next_entry(p, link);
>>> +	return &p->link == &map->list ? NULL : p;
>>> +}
>>> +
>>> +static inline struct lsm_ema *lsm_find_ema(struct lsm_ema_map *map,
>> size_t a)
>>> +{
>>> +	struct lsm_ema *p;
>>> +
>>> +	BUG_ON(!mutex_is_locked(&map->lock));
>>> +
>>> +	list_for_each_entry(p, &map->list, link)
>>> +		if (a < p->end)
>>> +			break;
>>> +	return &p->link == &map->list ? NULL : p;
>>> +}
>>> +
>>> +static inline int lsm_insert_ema(struct lsm_ema_map *map, struct
>> lsm_ema *n)
>>> +{
>>> +	struct lsm_ema *p = lsm_find_ema(map, n->start);
>>> +
>>> +	if (!p)
>>> +		list_add_tail(&n->link, &map->list);
>>> +	else if (n->end <= p->start)
>>> +		list_add_tail(&n->link, &p->link);
>>> +	else
>>> +		return -EEXIST;
>>> +
>>> +	lsm_merge_ema(n, map);
>>> +	if (p)
>>> +		lsm_merge_ema(p, map);
>>> +	return 0;
>>> +}
>>> +
>>> +static inline int lsm_for_each_ema(struct lsm_ema_map *map, size_t
>> start,
>>> +				   size_t end, int (*cb)(struct lsm_ema *,
>>> +							 void *), void *arg)
>>> +{
>>> +	struct lsm_ema *ema;
>>> +	int rc;
>>> +
>>> +	ema = lsm_find_ema(map, start);
>>> +	while (ema && end > ema->start) {
>>> +		if (start > ema->start)
>>> +			lsm_split_ema(ema, start, map);
>>> +		if (end < ema->end)
>>> +			ema = lsm_split_ema(ema, end, map);
>>> +
>>> +		rc = (*cb)(ema, arg);
>>> +		lsm_merge_ema(ema, map);
>>> +		if (rc)
>>> +			return rc;
>>> +
>>> +		ema = lsm_next_ema(ema, map);
>>> +	}
>>> +
>>> +	if (ema)
>>> +		lsm_merge_ema(ema, map);
>>> +	return 0;
>>> +}
>> There is no way that these belong as part of the LSM
>> infrastructure. If you need an enclave management API
>> you need to find some other place for it.
> They are NO enclave management APIs. They don't manage enclaves. They track origins of enclave pages. They are needed by all LSMs.
>
> As I stated in the cover letter, the primary question is how to prevent SGX from being abused as a backdoor to make executable pages that would otherwise not be executable without SGX. Any LSM module unaware of that would leave that "hole" open. So tracking enclave pages will become a common task for all LSMs that care page protections, and that's why I place it inside LSM infrastructure.

Page protections are an important part of many security features,
but that's beside the point. The LSM system provides the mechanism for
applying additional restrictions to existing security mechanisms.
First, you create the security mechanism (e.g. enclaves) then you
add LSM hooks so that security modules (e.g. SELinux) can apply
their own policies in addition. In support of this, the LSM blob
mechanism allows security modules to maintain their own information
about the system components (e.g. file, inode, cred, task) they
care about. The LSM infrastructure does not itself provide or
support security data or policy. That's strictly for the modules
to do.
Xing, Cedric June 28, 2019, 12:47 a.m. UTC | #4
> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> Sent: Thursday, June 27, 2019 4:37 PM
> >>
> >> This code should not be mixed in with the LSM infrastructure.
> >> It should all be contained in its own module, under security/enclave.
> > lsm_ema is *intended* to be part of the LSM infrastructure.
> 
> That's not going to fly, not for a minute.

Why not, if there's a need for it?

And what's the concern here if it becomes part of the LSM infrastructure?

> 
> >  It is going to be shared among all LSMs that would like to track
> enclave pages and their origins.
> 
> That's true for InfiniBand, tun and sctp as well. Look at their
> implementations.

As far as I can tell, InfiniBand, tun and sctp, all of them seem to be used inside SELinux only.

If you had a chance to look at v1 of my series, I started by burying everything inside SELinux too. But Stephen pointed out that such tracking would be needed by all LSMs, so code duplication might be a concern. Thus I responded by moving it into the LSM infrastructure.

> 
> > And they could be extended to store more information as deemed
> appropriate by the LSM module.
> 
> Which is what blobs are for, but that does not appear to be how
> you're using either the file blob or your new ema blob.

An lsm_ema_map pointer is stored in file->f_security. Each lsm_ema_map contains a list of lsm_ema structures. In my last patch, SELinux stores an ema_security_struct with every EMA, by setting selinux_blob_sizes.lbs_ema_data to sizeof(ema_security_struct).

ema_security_struct is initialized in selinux_enclave_load(), and checked in enclave_mprotect(), which is a subroutine of selinux_file_mprotect(). BTW, it is allocated/freed automatically by the LSM infrastructure in security_enclave_load()/security_file_free().
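
For concreteness, a hedged sketch of such a per-EMA blob; the field names
here are assumptions, not quoted from patch 3/3:

	/* SELinux's per-EMA data, co-allocated after struct lsm_ema */
	struct ema_security_struct {
		u32 sid;	/* assumed: SID of the source file of the range */
		u32 perms;	/* assumed: protections approved at load time */
	};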

> 
> >  The last patch of this series shows how to extend EMA inside SELinux.
> 
> I don't see (but I admit the code doesn't make a lot of sense to me)
> anything you couldn't do in the SELinux code by adding data to the
> file blob. The data you're adding to the LSM infrastructure doesn't
> belong there, and it doesn't need to be there.

You are correct. My v1 did it inside SELinux.

The key question I think is whether only SELinux needs it, or all LSMs need it. Stephen thought it was the latter (and I agree with him) so I moved it into the LSM infrastructure to be shared, just like the auditing code.

> >> Not acceptable for the LSM infrastructure. They
> >> are inconsistent with the way data is used there.
> > I'm not sure I understand this comment.
> 
> It means that your definition and use of the lsm_ema_blob
> does not match the way other blobs are managed and used.
> The LSM infrastructure uses these entries in a very particular
> way, and you're trying to use it differently. Your might be
> able to change the rest of the enclave system to use it
> correctly, or you might be able to find a different place
> for it.

I'm still not sure why you think this (lbs_ema_data) is inconsistent with other blobs. 

Same as with all other blobs, an LSM requests space by storing the needed size, is assigned an offset, and the buffer is allocated/freed by the infrastructure. Am I missing anything?
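
For illustration, a hedged sketch of the SELinux side of that pattern,
assuming the ema_security_struct sketched above:

	/* reserve per-EMA space, exactly like the other lbs_* blobs */
	struct lsm_blob_sizes selinux_blob_sizes __lsm_ro_after_init = {
		.lbs_file	= sizeof(struct file_security_struct),
		.lbs_ema_data	= sizeof(struct ema_security_struct),
		/* ... other lbs_* sizes elided ... */
	};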

> >
> > As I stated in the cover letter, the primary question is how to
> prevent SGX from being abused as a backdoor to make executable pages
> that would otherwise not be executable without SGX. Any LSM module
> unaware of that would leave that "hole" open. So tracking enclave pages
> will become a common task for all LSMs that care page protections, and
> that's why I place it inside LSM infrastructure.
> 
> Page protections are an important part of many security features,
> but that's beside the point. The LSM system provides mechanism for
> providing additional restrictions to existing security mechanisms.
> First, you create the security mechanism (e.g. enclaves) then you
> add LSM hooks so that security modules (e.g. SELinux) can apply
> their own policies in addition. In support of this, the LSM blob
> mechanism allows security modules to maintain their own information
> about the system components (e.g. file, inode, cred, task) they
> care about. The LSM infrastructure does not itself provide or
> support security data or policy. That's strictly for the modules
> to do.

Agreed!

EMA doesn't dictate policies for sure. Is it considered "security data"? I'm not sure of the definition of "security data" here. It does store some "data", something that multiple LSM modules would need to duplicate if not pulled into a common place. It is meant to be a "helper" data structure, just like the auditing code.
Stephen Smalley June 28, 2019, 4:37 p.m. UTC | #5
On 6/27/19 2:56 PM, Cedric Xing wrote:
> SGX enclaves are loaded from pages in regular memory. Given the ability to
> create executable pages, the newly added SGX subsystem may present a backdoor
> for adversaries to circumvent LSM policies, such as creating an executable
> enclave page from a modified regular page that would otherwise not be made
> executable as prohibited by LSM. Therefore arises the primary question of
> whether an enclave page should be allowed to be created from a given source
> page in regular memory.
> 
> A related question is whether to grant/deny a mprotect() request on a given
> enclave page/range. mprotect() is traditionally covered by
> security_file_mprotect() hook, however, enclave pages have a different lifespan
> than either MAP_PRIVATE or MAP_SHARED. Particularly, MAP_PRIVATE pages have the
> same lifespan as the VMA while MAP_SHARED pages have the same lifespan as the
> backing file (on disk), but enclave pages have the lifespan of the enclave’s
> file descriptor. For example, enclave pages could be munmap()’ed then mmap()’ed
> again without losing contents (like MAP_SHARED), but all enclave pages will be
> lost once its file descriptor has been closed (like MAP_PRIVATE). That said,
> LSM modules need some new data structure for tracking protections of enclave
> pages/ranges so that they can make proper decisions at mmap()/mprotect()
> syscalls.
> 
> The last question, which is orthogonal to the 2 above, is whether or not to
> allow a given enclave to launch/run. Enclave pages are not visible to the rest
> of the system, so to some extent offer a better place for malicious software to
> hide. Thus, it is sometimes desirable to whitelist/blacklist enclaves by their
> measurements, signing public keys, or image files.
> 
> To address the questions above, 2 new LSM hooks are added for enclaves.
>    - security_enclave_load() – This hook allows LSM to decide whether or not to
>      allow instantiation of a range of enclave pages using the specified VMA. It
>      is invoked when a range of enclave pages is about to be loaded. It serves 3
>      purposes: 1) indicate to LSM that the file struct in subject is an enclave;
>      2) allow LSM to decide whether or not to instantiate those pages and 3)
>      allow LSM to initialize internal data structures for tracking
>      origins/protections of those pages.
>    - security_enclave_init() – This hook allows whitelisting/blacklisting or
>      performing whatever checks deemed appropriate before an enclave is allowed
>      to run. An LSM module may opt to use the file backing the SIGSTRUCT as a
>      proxy to dictate allowed protections for anonymous pages.
> 
> mprotect() of enclave pages continue to be governed by
> security_file_mprotect(), with the expectation that LSM is able to distinguish
> between regular and enclave pages inside the hook. For mmap(), the SGX
> subsystem is expected to invoke security_file_mprotect() explicitly to check
> protections against the requested protections for existing enclave pages. As
> stated earlier, enclave pages have different lifespan than the existing
> MAP_PRIVATE and MAP_SHARED pages, so would require a new data structure outside
> of VMA to track their protections and/or origins. Enclave Memory Area (or EMA
> for short) has been introduced to address the need. EMAs are maintained by the
> LSM framework for all LSM modules to share. EMAs will be instantiated for
> enclaves only so will not impose memory/performance overheads for regular
> applications/files. Please see include/linux/lsm_ema.h and security/lsm_ema.c
> for details.
> 
> A new setup parameter – lsm.ema.cache_decisions has been introduced to offer
> the choice between memory consumption and accuracy of audit logs. Enabling
> lsm.ema.cache_decisions causes LSM framework NOT to keep backing files open for
> EMAs. While that saves memory, it requires LSM modules to make and cache
> decisions ahead of time, and makes it difficult for LSM modules to generate
> accurate audit logs. System administrators are expected to run LSM in
> permissive mode with lsm.ema.cache_decisions off to determine the minimal
> permissions needed, and then turn it back on in enforcing mode for optimal
> performance and memory usage. lsm.ema.cache_decisions is on by default and
> could be turned off by appending “lsm.ema.cache_decisions=0” or
> “lsm.ema.cache_decisions=off” to the kernel command line.

This seems problematic on a few fronts:

- Specifying it as a boot parameter requires teaching admins / policy 
developers to do something in addition to what they are already doing 
when they want to create policy,

- Limiting it to a boot parameter requires a reboot to change the mode 
of operation, whereas SELinux offers runtime toggling of permissive mode 
and even per-process (domain) permissive mode (and so does AppArmor),

- In the cache_decisions=1 case, do we get any auditing at all?  If not, 
that's a problem.  We want auditing not only when we are 
generating/learning policy but also in operation.

- There is the potential for inconsistencies to arise between the 
enforcement applied with different cache_decisions values.

I would suggest that we just never cache the decision and accept the 
cost if we are going to take this approach.

> 
> Signed-off-by: Cedric Xing <cedric.xing@intel.com>
> ---
>   include/linux/lsm_ema.h   | 171 ++++++++++++++++++++++++++++++++++++++
>   include/linux/lsm_hooks.h |  29 +++++++
>   include/linux/security.h  |  23 +++++
>   security/Makefile         |   1 +
>   security/lsm_ema.c        | 132 +++++++++++++++++++++++++++++
>   security/security.c       |  47 ++++++++++-
>   6 files changed, 402 insertions(+), 1 deletion(-)
>   create mode 100644 include/linux/lsm_ema.h
>   create mode 100644 security/lsm_ema.c
> 
> diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h
> new file mode 100644
> index 000000000000..a09b8f96da05
> --- /dev/null
> +++ b/include/linux/lsm_ema.h
> @@ -0,0 +1,171 @@
> +/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
> +/**
> + * Enclave Memory Area interface for LSM modules
> + *
> + * Copyright(c) 2016-19 Intel Corporation.
> + */
> +
> +#ifndef _LSM_EMA_H_
> +#define _LSM_EMA_H_
> +
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <linux/fs.h>
> +#include <linux/file.h>
> +
> +/**
> + * lsm_ema - LSM Enclave Memory Area structure
> + *
> + * Data structure to track origins of enclave pages
> + *
> + * @link:
> + *	Link to adjacent EMAs. EMAs are sorted by their addresses in ascending
> + *	order
> + * @start:
> + *	Starting address
> + * @end:
> + *	Ending address
> + * @source:
> + *	File from which this range was loaded from, or NULL if not loaded from
> + *	any files
> + */
> +struct lsm_ema {
> +	struct list_head	link;
> +	size_t			start;
> +	size_t			end;
> +	struct file		*source;
> +};
> +
> +#define lsm_ema_data(ema, blob_sizes)	\
> +	((char *)((struct lsm_ema *)(ema) + 1) + blob_sizes.lbs_ema_data)
> +
> +/**
> + * lsm_ema_map - LSM Enclave Memory Map structure
> + *
> + * Container for EMAs of an enclave
> + *
> + * @list:
> + *	Head of a list of sorted EMAs
> + * @lock:
> + *	Acquire before querying/updating the list of EMAs
> + */
> +struct lsm_ema_map {
> +	struct list_head	list;
> +	struct mutex		lock;
> +};
> +
> +/**
> + * These are functions to be used by the LSM framework, and must be defined
> + * regardless CONFIG_INTEL_SGX is enabled or not.
> + */
> +
> +#ifdef CONFIG_INTEL_SGX
> +void lsm_ema_global_init(size_t);
> +void lsm_free_ema_map(atomic_long_t *);
> +#else
> +static inline void lsm_ema_global_init(size_t ema_data_size)
> +{
> +}
> +
> +static inline void lsm_free_ema_map(atomic_long_t *p)
> +{
> +}
> +#endif
> +
> +/**
> + * Below are APIs to be used by LSM modules
> + */
> +
> +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *);
> +struct lsm_ema *lsm_alloc_ema(void);
> +void lsm_free_ema(struct lsm_ema *);
> +void lsm_init_ema(struct lsm_ema *, size_t, size_t, struct file *);
> +int lsm_merge_ema(struct lsm_ema *, struct lsm_ema_map *);
> +struct lsm_ema *lsm_split_ema(struct lsm_ema *, size_t, struct lsm_ema_map *);
> +
> +static inline struct lsm_ema_map *lsm_get_ema_map(struct file *f)
> +{
> +	return (void *)atomic_long_read(f->f_security);
> +}
> +
> +static inline int __must_check lsm_lock_ema(struct lsm_ema_map *map)
> +{
> +	return mutex_lock_interruptible(&map->lock);
> +}
> +
> +static inline void lsm_unlock_ema(struct lsm_ema_map *map)
> +{
> +	mutex_unlock(&map->lock);
> +}
> +
> +static inline struct lsm_ema *lsm_prev_ema(struct lsm_ema *p,
> +					   struct lsm_ema_map *map)
> +{
> +	p = list_prev_entry(p, link);
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline struct lsm_ema *lsm_next_ema(struct lsm_ema *p,
> +					   struct lsm_ema_map *map)
> +{
> +	p = list_next_entry(p, link);
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline struct lsm_ema *lsm_find_ema(struct lsm_ema_map *map, size_t a)
> +{
> +	struct lsm_ema *p;
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	list_for_each_entry(p, &map->list, link)
> +		if (a < p->end)
> +			break;
> +	return &p->link == &map->list ? NULL : p;
> +}
> +
> +static inline int lsm_insert_ema(struct lsm_ema_map *map, struct lsm_ema *n)
> +{
> +	struct lsm_ema *p = lsm_find_ema(map, n->start);
> +
> +	if (!p)
> +		list_add_tail(&n->link, &map->list);
> +	else if (n->end <= p->start)
> +		list_add_tail(&n->link, &p->link);
> +	else
> +		return -EEXIST;
> +
> +	lsm_merge_ema(n, map);
> +	if (p)
> +		lsm_merge_ema(p, map);
> +	return 0;
> +}
> +
> +static inline int lsm_for_each_ema(struct lsm_ema_map *map, size_t start,
> +				   size_t end, int (*cb)(struct lsm_ema *,
> +							 void *), void *arg)
> +{
> +	struct lsm_ema *ema;
> +	int rc;
> +
> +	ema = lsm_find_ema(map, start);
> +	while (ema && end > ema->start) {
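> +		/* Clip this EMA to [start, end) so that @cb sees only the
> +		 * requested range; lsm_merge_ema() below undoes the split.
> +		 */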
> +		if (start > ema->start)
> +			lsm_split_ema(ema, start, map);
> +		if (end < ema->end)
> +			ema = lsm_split_ema(ema, end, map);
> +
> +		rc = (*cb)(ema, arg);
> +		lsm_merge_ema(ema, map);
> +		if (rc)
> +			return rc;
> +
> +		ema = lsm_next_ema(ema, map);
> +	}
> +
> +	if (ema)
> +		lsm_merge_ema(ema, map);
> +	return 0;
> +}
> +
> +#endif /* _LSM_EMA_H_ */
> diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
> index 47f58cfb6a19..ade1f9f81e64 100644
> --- a/include/linux/lsm_hooks.h
> +++ b/include/linux/lsm_hooks.h
> @@ -29,6 +29,8 @@
>   #include <linux/init.h>
>   #include <linux/rculist.h>
>   
> +struct lsm_ema;
> +struct sgx_sigstruct;
> +
>   /**
>    * union security_list_options - Linux Security Module hook function list
>    *
> @@ -1446,6 +1448,21 @@
>    * @bpf_prog_free_security:
>    *	Clean up the security information stored inside bpf prog.
>    *
> + * @enclave_load:
> + *	Decide if a range of pages shall be allowed to be loaded into an
> + *	enclave
> + *
> + *	@encl points to the file identifying the target enclave
> + *	@ema specifies the target range to be loaded
> + *	@flags contains protections being requested for the target range
> + *	@source points to the VMA containing the source pages to be loaded
> + *
> + * @enclave_init:
> + *	Decide if an enclave shall be allowed to launch
> + *
> + *	@encl points to the file identifying the target enclave being launched
> + *	@sigstruct contains a copy of the SIGSTRUCT in kernel memory
> + *	@source points to the VMA backing SIGSTRUCT in user memory
>    */
>   union security_list_options {
>   	int (*binder_set_context_mgr)(struct task_struct *mgr);
> @@ -1807,6 +1824,13 @@ union security_list_options {
>   	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
>   	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
>   #endif /* CONFIG_BPF_SYSCALL */
> +
> +#ifdef CONFIG_INTEL_SGX
> +	int (*enclave_load)(struct file *encl, struct lsm_ema *ema,
> +			    size_t flags, struct vm_area_struct *source);
> +	int (*enclave_init)(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			    struct vm_area_struct *source);
> +#endif
>   };
>   
>   struct security_hook_heads {
> @@ -2046,6 +2070,10 @@ struct security_hook_heads {
>   	struct hlist_head bpf_prog_alloc_security;
>   	struct hlist_head bpf_prog_free_security;
>   #endif /* CONFIG_BPF_SYSCALL */
> +#ifdef CONFIG_INTEL_SGX
> +	struct hlist_head enclave_load;
> +	struct hlist_head enclave_init;
> +#endif
>   } __randomize_layout;
>   
>   /*
> @@ -2069,6 +2097,7 @@ struct lsm_blob_sizes {
>   	int	lbs_ipc;
>   	int	lbs_msg_msg;
>   	int	lbs_task;
> +	int	lbs_ema_data;
>   };
>   
>   /*
> diff --git a/include/linux/security.h b/include/linux/security.h
> index 659071c2e57c..52c200810004 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -1829,5 +1829,28 @@ static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
>   #endif /* CONFIG_SECURITY */
>   #endif /* CONFIG_BPF_SYSCALL */
>   
> +#ifdef CONFIG_INTEL_SGX
> +struct sgx_sigstruct;
> +#ifdef CONFIG_SECURITY
> +int security_enclave_load(struct file *encl, size_t start, size_t end,
> +			  size_t flags, struct vm_area_struct *source);
> +int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			  struct vm_area_struct *source);
> +#else
> +static inline int security_enclave_load(struct file *encl, size_t start,
> +					size_t end, size_t flags,
> +					struct vm_area_struct *src)
> +{
> +	return 0;
> +}
> +
> +static inline int security_enclave_init(struct file *encl,
> +					struct sgx_sigstruct *sigstruct,
> +					struct vm_area_struct *src)
> +{
> +	return 0;
> +}
> +#endif /* CONFIG_SECURITY */
> +#endif /* CONFIG_INTEL_SGX */
> +
>   #endif /* ! __LINUX_SECURITY_H */
>   
> diff --git a/security/Makefile b/security/Makefile
> index c598b904938f..1bab8f1344b6 100644
> --- a/security/Makefile
> +++ b/security/Makefile
> @@ -28,6 +28,7 @@ obj-$(CONFIG_SECURITY_YAMA)		+= yama/
>   obj-$(CONFIG_SECURITY_LOADPIN)		+= loadpin/
>   obj-$(CONFIG_SECURITY_SAFESETID)       += safesetid/
>   obj-$(CONFIG_CGROUP_DEVICE)		+= device_cgroup.o
> +obj-$(CONFIG_INTEL_SGX)			+= lsm_ema.o
>   
>   # Object integrity file lists
>   subdir-$(CONFIG_INTEGRITY)		+= integrity
> diff --git a/security/lsm_ema.c b/security/lsm_ema.c
> new file mode 100644
> index 000000000000..68fae0724d37
> --- /dev/null
> +++ b/security/lsm_ema.c
> @@ -0,0 +1,132 @@
> +// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
> +// Copyright(c) 2016-18 Intel Corporation.
> +
> +#include <linux/lsm_ema.h>
> +#include <linux/slab.h>
> +
> +static struct kmem_cache *lsm_ema_cache;
> +static size_t lsm_ema_data_size;
> +static int lsm_ema_cache_decisions = 1;
> +
> +void lsm_ema_global_init(size_t ema_size)
> +{
> +	BUG_ON(lsm_ema_data_size > 0);
> +
> +	lsm_ema_data_size = ema_size;
> +
> +	ema_size += sizeof(struct lsm_ema);
> +	ema_size = max(ema_size, sizeof(struct lsm_ema_map));
> +	lsm_ema_cache = kmem_cache_create("lsm_ema_cache", ema_size,
> +					  __alignof__(struct lsm_ema),
> +					  SLAB_PANIC, NULL);
> +}
> +
> +struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *p)
> +{
> +	struct lsm_ema_map *map;
> +
> +	map = (typeof(map))atomic_long_read(p);
> +	if (!map) {
> +		long n;
> +
> +		map = (typeof(map))lsm_alloc_ema();
> +		if (!map)
> +			return NULL;
> +
> +		INIT_LIST_HEAD(&map->list);
> +		mutex_init(&map->lock);
> +
> +		n = atomic_long_cmpxchg(p, 0, (long)map);
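> +		/* Another thread installed a map first: free ours, keep theirs. */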
> +		if (n) {
> +			atomic_long_t a;
> +			atomic_long_set(&a, (long)map);
> +			map = (typeof(map))n;
> +			lsm_free_ema_map(&a);
> +		}
> +	}
> +	return map;
> +}
> +
> +void lsm_free_ema_map(atomic_long_t *p)
> +{
> +	struct lsm_ema_map *map;
> +	struct lsm_ema *ema, *n;
> +
> +	map = (typeof(map))atomic_long_read(p);
> +	if (!map)
> +		return;
> +
> +	BUG_ON(mutex_is_locked(&map->lock));
> +
> +	list_for_each_entry_safe(ema, n, &map->list, link)
> +		lsm_free_ema(ema);
> +	kmem_cache_free(lsm_ema_cache, map);
> +}
> +
> +struct lsm_ema *lsm_alloc_ema(void)
> +{
> +	return kmem_cache_zalloc(lsm_ema_cache, GFP_KERNEL);
> +}
> +
> +void lsm_free_ema(struct lsm_ema *ema)
> +{
> +	list_del(&ema->link);
> +	if (ema->source)
> +		fput(ema->source);
> +	kmem_cache_free(lsm_ema_cache, ema);
> +}
> +
> +void lsm_init_ema(struct lsm_ema *ema, size_t start, size_t end,
> +		  struct file *source)
> +{
> +	INIT_LIST_HEAD(&ema->link);
> +	ema->start = start;
> +	ema->end = end;
> +	if (!lsm_ema_cache_decisions && source)
> +		ema->source = get_file(source);
> +}
> +
> +int lsm_merge_ema(struct lsm_ema *p, struct lsm_ema_map *map)
> +{
> +	struct lsm_ema *prev = list_prev_entry(p, link);
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	if (&prev->link == &map->list || prev->end != p->start ||
> +	    prev->source != p->source ||
> +	    memcmp(prev + 1, p + 1, lsm_ema_data_size))
> +		return 0;
> +
> +	p->start = prev->start;
> +	/* lsm_free_ema() below drops prev's reference on the source file */
> +	lsm_free_ema(prev);
> +	return 1;
> +}
> +
> +struct lsm_ema *lsm_split_ema(struct lsm_ema *p, size_t at,
> +			      struct lsm_ema_map *map)
> +{
> +	struct lsm_ema *n;
> +
> +	BUG_ON(!mutex_is_locked(&map->lock));
> +
> +	if (at <= p->start || at >= p->end)
> +		return p;
> +
> +	n = lsm_alloc_ema();
> +	if (likely(n)) {
> +		lsm_init_ema(n, p->start, at, p->source);
> +		memcpy(n + 1, p + 1, lsm_ema_data_size);
> +		p->start = at;
> +		list_add_tail(&n->link, &p->link);
> +	}
> +	return n;
> +}
> +
> +static int __init set_ema_cache_decisions(char *str)
> +{
> +	lsm_ema_cache_decisions = (strcmp(str, "0") && strcmp(str, "off"));
> +	return 1;
> +}
> +__setup("lsm.ema.cache_decisions=", set_ema_cache_decisions);
> diff --git a/security/security.c b/security/security.c
> index f493db0bf62a..d50883f18be2 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -17,6 +17,7 @@
>   #include <linux/init.h>
>   #include <linux/kernel.h>
>   #include <linux/lsm_hooks.h>
> +#include <linux/lsm_ema.h>
>   #include <linux/integrity.h>
>   #include <linux/ima.h>
>   #include <linux/evm.h>
> @@ -41,7 +42,9 @@ static struct kmem_cache *lsm_file_cache;
>   static struct kmem_cache *lsm_inode_cache;
>   
>   char *lsm_names;
> -static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init;
> +static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init = {
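> +	/* Room in the file blob for an EMA map pointer, only when SGX is on. */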
> +	.lbs_file = sizeof(atomic_long_t) * IS_ENABLED(CONFIG_INTEL_SGX),
> +};
>   
>   /* Boot-time LSM user choice */
>   static __initdata const char *chosen_lsm_order;
> @@ -169,6 +172,7 @@ static void __init lsm_set_blob_sizes(struct lsm_blob_sizes *needed)
>   	lsm_set_blob_size(&needed->lbs_ipc, &blob_sizes.lbs_ipc);
>   	lsm_set_blob_size(&needed->lbs_msg_msg, &blob_sizes.lbs_msg_msg);
>   	lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
> +	lsm_set_blob_size(&needed->lbs_ema_data, &blob_sizes.lbs_ema_data);
>   }
>   
>   /* Prepare LSM for initialization. */
> @@ -314,6 +318,7 @@ static void __init ordered_lsm_init(void)
>   		lsm_inode_cache = kmem_cache_create("lsm_inode_cache",
>   						    blob_sizes.lbs_inode, 0,
>   						    SLAB_PANIC, NULL);
> +	lsm_ema_global_init(blob_sizes.lbs_ema_data);
>   
>   	lsm_early_cred((struct cred *) current->cred);
>   	lsm_early_task(current);
> @@ -1357,6 +1362,7 @@ void security_file_free(struct file *file)
>   	blob = file->f_security;
>   	if (blob) {
>   		file->f_security = NULL;
> +		lsm_free_ema_map(blob);
>   		kmem_cache_free(lsm_file_cache, blob);
>   	}
>   }
> @@ -1420,6 +1426,7 @@ int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
>   {
>   	return call_int_hook(file_mprotect, 0, vma, reqprot, prot);
>   }
> +EXPORT_SYMBOL(security_file_mprotect);
>   
>   int security_file_lock(struct file *file, unsigned int cmd)
>   {
> @@ -2355,3 +2362,41 @@ void security_bpf_prog_free(struct bpf_prog_aux *aux)
>   	call_void_hook(bpf_prog_free_security, aux);
>   }
>   #endif /* CONFIG_BPF_SYSCALL */
> +
> +#ifdef CONFIG_INTEL_SGX
> +int security_enclave_load(struct file *encl, size_t start, size_t end,
> +			  size_t flags, struct vm_area_struct *src)
> +{
> +	struct lsm_ema_map *map;
> +	struct lsm_ema *ema;
> +	int rc;
> +
> +	map = lsm_init_or_get_ema_map(encl->f_security);
> +	if (unlikely(!map))
> +		return -ENOMEM;
> +
> +	ema = lsm_alloc_ema();
> +	if (unlikely(!ema))
> +		return -ENOMEM;
> +
> +	lsm_init_ema(ema, start, end, src->vm_file);
> +	rc = call_int_hook(enclave_load, 0, encl, ema, flags, src);
> +	if (!rc)
> +		rc = lsm_lock_ema(map);
> +	if (!rc) {
> +		rc = lsm_insert_ema(map, ema);
> +		lsm_unlock_ema(map);
> +	}
> +	if (rc)
> +		lsm_free_ema(ema);
> +	return rc;
> +}
> +EXPORT_SYMBOL(security_enclave_load);
> +
> +int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
> +			  struct vm_area_struct *src)
> +{
> +	return call_int_hook(enclave_init, 0, encl, sigstruct, src);
> +}
> +EXPORT_SYMBOL(security_enclave_init);
> +#endif /* CONFIG_INTEL_SGX */
>
Casey Schaufler June 28, 2019, 5:22 p.m. UTC | #6
On 6/27/2019 5:47 PM, Xing, Cedric wrote:
>> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
>> Sent: Thursday, June 27, 2019 4:37 PM
>>>> This code should not be mixed in with the LSM infrastructure.
>>>> It should all be contained in its own module, under security/enclave.
>>> lsm_ema is *intended* to be part of the LSM infrastructure.
>> That's not going to fly, not for a minute.
> Why not, if there's a need for it?
>
> And what's the concern here if it becomes part of the LSM infrastructure.

The LSM infrastructure provides a framework for hooks
and allocation of blobs. That's it. It's a layer for
connecting system features like VFS, IPC and the IP stack
to the security modules. It does not implement any policy
of its own. We are not going to implement SGX or any other
mechanism within the LSM infrastructure.

>>>  It is going to be shared among all LSMs that would like to track
>> enclave pages and their origins.
>>
>> That's true for InfiniBand, tun and sctp as well. Look at their
>> implementations.
> As far as I can tell, InfiniBand, tun and sctp, all of them seem to be used inside SELinux only.

So? 

> If you had a chance to look at v1 of my series, I started by burying everything inside SELinux too. But Stephen pointed out such tracking would be needed by all LSMs so code duplication might be a concern. Thus I responded by moving it into LSM infrastructure.

What you need to do is move all the lsm_ema code into its own
place (which could be security/enclave). Manage your internal
data as you like. LSMs (e.g. SELinux) can call your APIs if
needed. If the LSMs need to store SGX information with the file
structure they need to include that in the space they ask for in
the file blob.


>>> And they could be extended to store more information as deemed
>> appropriate by the LSM module.
>>
>> Which is what blobs are for, but that does not appear to be how
>> you're using either the file blob or your new ema blob.
> A lsm_ema_map pointer is stored in file->f_security.

That's up to the individual security module to decide.

>  Each lsm_ema_map contains a list of lsm_ema structures. In my last patch, SELinux stores a ema_security_struct with every ema, by setting selinux_blob_sizes.lbs_ema_data to sizeof(ema_security_struct).

You are managing the ema map lists. You don't need the LSM
infrastructure to do that.

> ema_security_struct is initialized in selinux_enclave_load(), and checked in enclave_mprotect(), which is a subroutine of selinux_file_mprotect(). BTW, it is alloced/freed automatically by LSM infrastructure in security_enclave_load()/security_file_free().

Do you mean security_enclave_load()/security_enclave_free() ?
There is no way you can possibly have sane behavior if your
allocation and free aren't tied to the same blob.

>>>  The last patch of this series shows how to extend EMA inside SELinux.
>> I don't see (but I admit the code doesn't make a lot of sense to me)
>> anything you couldn't do in the SELinux code by adding data to the
>> file blob. The data you're adding to the LSM infrastructure doesn't
>> belong there, and it doesn't need to be there.
> You are correct. My v1 did it inside SELinux.
>
> The key question I think is whether only SELinux needs it, or all LSMs need it. Stephen thought it was the latter (and I agree with him) so I moved it into the LSM infrastructure to be shared, just like the auditing code.

You are both right that it doesn't belong in the SELinux code.
It also doesn't belong as part of the LSM infrastructure.

>>>> Not acceptable for the LSM infrastructure. They
>>>> are inconsistent with the way data is used there.
>>> I'm not sure I understand this comment.
>> It means that your definition and use of the lsm_ema_blob
>> does not match the way other blobs are managed and used.
>> The LSM infrastructure uses these entries in a very particular
>> way, and you're trying to use it differently. You might be
>> able to change the rest of the enclave system to use it
>> correctly, or you might be able to find a different place
>> for it.
> I'm still not sure why you think this (lbs_ema_data) is inconsistent with other blobs. 
>
> Same as all other blobs, an LSM requests it by storing the needed size in it, and is assigned an offset, and the buffer is allocated/freed by the infrastructure. Am I missing anything?

Yes. Aside from allocation and deletion the infrastructure does
nothing with the blobs. The blobs are used only by the security
modules. All other data is maintained and used elsewhere. SGX
specific data needs to be maintained and managed elsewhere.

>>> As I stated in the cover letter, the primary question is how to
>> prevent SGX from being abused as a backdoor to make executable pages
>> that would otherwise not be executable without SGX. Any LSM module
>> unaware of that would leave that "hole" open. So tracking enclave pages
>> will become a common task for all LSMs that care page protections, and
>> that's why I place it inside LSM infrastructure.
>>
>> Page protections are an important part of many security features,
>> but that's beside the point. The LSM system provides mechanism for
>> providing additional restrictions to existing security mechanisms.
>> First, you create the security mechanism (e.g. enclaves) then you
>> add LSM hooks so that security modules (e.g. SELinux) can apply
>> their own policies in addition. In support of this, the LSM blob
>> mechanism allows security modules to maintain their own information
>> about the system components (e.g. file, inode, cred, task) they
>> care about. The LSM infrastructure does not itself provide or
>> support security data or policy. That's strictly for the modules
>> to do.
> Agreed!
>
> EMA doesn't dictate policies for sure. Is it considered "security data"? I'm not sure the definition of "security data" here. It does store some "data", something that multiple LSM modules would need to duplicate if not pulled into a common place. It is meant to be a "helper" data structure, just like the auditing code.

Good example. You'll see that there is no audit code in the
LSM infrastructure. None. No audit data, either. It's all taken
care of in the audit system.
Xing, Cedric June 28, 2019, 9:53 p.m. UTC | #7
> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> owner@vger.kernel.org] On Behalf Of Stephen Smalley
> Sent: Friday, June 28, 2019 9:37 AM
> 
> > lsm.ema.cache_decisions is on by default and
> > could be turned off by appending “lsm.ema.cache_decisions=0” or
> > “lsm.ema.cache_decisions=off” to the kernel command line.
> 
> This seems problematic on a few fronts:
> 
> - Specifying it as a boot parameter requires teaching admins / policy
> developers to do something in addition to what they are already doing
> when they want to create policy,
> 
> - Limiting it to a boot parameter requires a reboot to change the mode
> of operation, whereas SELinux offers runtime toggling of permissive mode
> and even per-process (domain) permissive mode (and so does AppArmor),

How about making a variable tunable via sysctl?
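E.g. something like this in lsm_ema.c (rough, untested sketch; I haven't
wired it up):

	/* Rough sketch: expose lsm_ema_cache_decisions via sysctl */
	#include <linux/sysctl.h>

	static int zero, one = 1;

	static struct ctl_table ema_sysctl_table[] = {
		{
			.procname	= "lsm_ema_cache_decisions",
			.data		= &lsm_ema_cache_decisions,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &zero,
			.extra2		= &one,
		},
		{ }
	};

	static int __init lsm_ema_sysctl_init(void)
	{
		return register_sysctl("kernel", ema_sysctl_table) ? 0 : -ENOMEM;
	}
	late_initcall(lsm_ema_sysctl_init);

That would allow toggling at runtime via /proc/sys/kernel/lsm_ema_cache_decisions.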

> 
> - In the cache_decisions=1 case, do we get any auditing at all?  If not,
> that's a problem.  We want auditing not only when we are
> generating/learning policy but also in operation.

Currently it doesn't generate an audit log, but I could add one, except it couldn't point to the exact file. But I can use the sigstruct file instead so administrators can at least tell which enclave violates the policy. Do you think that's acceptable?

> 
> - There is the potential for inconsistencies to arise between the
> enforcement applied with different cache_decisions values.

The enforcement will be consistent. The difference only lies in the logs.

> 
> I would suggest that we just never cache the decision and accept the
> cost if we are going to take this approach.

This will also be a viable option. I don't think any enclaves would be comprised of a large number of files anyway. When SGX2 comes up, I think most enclaves will be instantiated from only one file and defer loading libraries at runtime. So in practice we are looking at keeping only one file open per enclave, which seems totally acceptable.

Stephen (and everyone having an opinion on this), which way do you prefer? sysctl variable? Or never cache decisions?
Xing, Cedric June 28, 2019, 10:29 p.m. UTC | #8
> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> Sent: Friday, June 28, 2019 10:22 AM
> >
> > And what's the concern here if it becomes part of the LSM
> infrastructure.
> 
> The LSM infrastructure provides a framework for hooks
> and allocation of blobs. That's it. It's a layer for
> connecting system features like VFS, IPC and the IP stack
> to the security modules. It does not implement any policy
> of it's own. We are not going to implement SGX or any other
> mechanism within the LSM infrastructure.

EMA doesn't force/implement any policy either. It just supplements VMA.

> 
> >>>  It is going to be shared among all LSMs that would like to track
> >> enclave pages and their origins.
> >>
> >> That's true for InfiniBand, tun and sctp as well. Look at their
> >> implementations.
> > As far as I can tell, InfiniBand, tun and sctp, all of them seem
> to be used inside SELinux only.
> 
> So?

So they are NOT shared among LSMs, which is different from EMA.

> 
> > If you had a chance to look at v1 of my series, I started by burying
> everything inside SELinux too. But Stephen pointed out such tracking
> would be needed by all LSMs so code duplication might be a concern. Thus
> I responded by moving it into LSM infrastructure.
> 
> What you need to do is move all the lsm_ema code into its own
> place (which could be security/enclave). Manage your internal
> data as you like. LSMs (e.g. SELinux) can call your APIs if
> needed. If the LSMs need to store SGX information with the file
> structure they need to include that in the space they ask for in
> the file blob.

I thought subdirectories were for LSM modules. EMA is more like auditing code, which has a header in include/linux/ and an implementation in security/. Is that right?

> 
> 
> >>> And they could be extended to store more information as deemed
> >> appropriate by the LSM module.
> >>
> >> Which is what blobs are for, but that does not appear to be how
> >> you're using either the file blob or your new ema blob.
> > A lsm_ema_map pointer is stored in file->f_security.
> 
> That's up to the individual security module to decide.

That's doable. The drawback is, if there are N LSM modules active, then the same information will be duplicated N times. Will that be a problem?

> 
> >  Each lsm_ema_map contains a list of lsm_ema structures. In my last
> patch, SELinux stores a ema_security_struct with every ema, by setting
> selinux_blob_sizes.lbs_ema_data to sizeof(ema_security_struct).
> 
> You are managing the ema map lists. You don't need the LSM
> infrastructure to do that.
> 
> > ema_security_struct is initialized in selinux_enclave_load(), and
> checked in enclave_mprotect(), which is a subroutine of
> selinux_file_mprotect(). BTW, it is alloced/freed automatically by LSM
> infrastructure in security_enclave_load()/security_file_free().
> 
> Do you mean security_enclave_load()/security_enclave_free() ?
> There is no way you can possibly have sane behavior if your
> allocation and free aren't tied to the same blob.

There's no security_*enclave*_free(). lsm_ema_map is allocated only for enclaves. But LSM doesn't know which file is an enclave, so the allocation is deferred until the first security_enclave_load(). security_file_free() frees the map if it isn't NULL.

> 
> >>>  The last patch of this series shows how to extend EMA inside
> SELinux.
> >> I don't see (but I admit the code doesn't make a lot of sense to me)
> >> anything you couldn't do in the SELinux code by adding data to the
> >> file blob. The data you're adding to the LSM infrastructure doesn't
> >> belong there, and it doesn't need to be there.
> > You are correct. My v1 did it inside SELinux.
> >
> > The key question I think is whether only SELinux needs it, or all LSMs
> need it. Stephen thought it was the latter (and I agree with him) so I
> moved it into the LSM infrastructure to be shared, just like the
> auditing code.
> 
> You are both right that it doesn't belong in the SELinux code.
> It also doesn't belong as part of the LSM infrastructure.

Then what is your suggestion? 

Is it the code in security_enclave_load()/security_file_free() that bothers you? Because you think they shouldn't do anything more than just a single line of call_int/void_hooks()?

> 
> >>>> Not acceptable for the LSM infrastructure. They
> >>>> are inconsistent with the way data is used there.
> >>> I'm not sure I understand this comment.
> >> It means that your definition and use of the lsm_ema_blob
> >> does not match the way other blobs are managed and used.
> >> The LSM infrastructure uses these entries in a very particular
> >> way, and you're trying to use it differently. You might be
> >> able to change the rest of the enclave system to use it
> >> correctly, or you might be able to find a different place
> >> for it.
> > I'm still not sure why you think this (lbs_ema_data) is inconsistent
> with other blobs.
> >
> > Same as all other blobs, an LSM requests it by storing the needed size
> in it, and is assigned an offset, and the buffer is allocated/freed by
> the infrastructure. Am I missing anything?
> 
> Yes. Aside from allocation and deletion the infrastructure does
> nothing with the blobs. The blobs are used only by the security
> modules. All other data is maintained and used elsewhere. SGX
> specific data needs to be maintained and managed elsewhere.
> 
> >>> As I stated in the cover letter, the primary question is how to
> >> prevent SGX from being abused as a backdoor to make executable pages
> >> that would otherwise not be executable without SGX. Any LSM module
> >> unaware of that would leave that "hole" open. So tracking enclave
> pages
> >> will become a common task for all LSMs that care page protections,
> and
> >> that's why I place it inside LSM infrastructure.
> >>
> >> Page protections are an important part of many security features,
> >> but that's beside the point. The LSM system provides mechanism for
> >> providing additional restrictions to existing security mechanisms.
> >> First, you create the security mechanism (e.g. enclaves) then you
> >> add LSM hooks so that security modules (e.g. SELinux) can apply
> >> their own policies in addition. In support of this, the LSM blob
> >> mechanism allows security modules to maintain their own information
> >> about the system components (e.g. file, inode, cred, task) they
> >> care about. The LSM infrastructure does not itself provide or
> >> support security data or policy. That's strictly for the modules
> >> to do.
> > Agreed!
> >
> > EMA doesn't dictate policies for sure. Is it considered "security
> data"? I'm not sure the definition of "security data" here. It does
> store some "data", something that multiple LSM modules would need to
> duplicate if not pulled into a common place. It is meant to be a
> "helper" data structure, just like the auditing code.
> 
> Good example. You'll see that there is no audit code in the
> LSM infrastructure. None. No audit data, either. It's all taken
> care of in the audit system. 

Did you mean security/security.c didn't call into any audit functions? lsm_audit.c is located in security/ and its header in include/linux/, but you don't have a problem with them. Am I right?

IIUC, you want me to pack whatever is inside security_enclave_load()/security_file_free() into APIs to be called by individual LSM modules. But if you look more closely at the code, an EMA can be inserted into the map only after *all* LSM modules have approved it. So if this logic were spread across individual LSMs, and there are multiple active LSMs, inconsistencies could arise among LSMs if each maintained its own map and made a different decision on the same EMA at enclave_load(). I'm not saying that's unresolvable, but it is definitely more error prone, besides wasting memory. Or do you have any practical suggestions?
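Just to illustrate the division of labor, the per-LSM side stays small --
roughly like this (hypothetical "foo" LSM, untested sketch against the API
in this patch):

	/* Untested sketch; "foo" is a hypothetical LSM, not in this series */
	struct foo_ema_blob {
		size_t protections;	/* protections granted at load time */
	};

	static struct lsm_blob_sizes foo_blob_sizes __lsm_ro_after_init = {
		.lbs_ema_data = sizeof(struct foo_ema_blob),
	};

	static int foo_enclave_load(struct file *encl, struct lsm_ema *ema,
				    size_t flags, struct vm_area_struct *source)
	{
		/* per-EMA blob co-allocated with the EMA by the framework */
		struct foo_ema_blob *blob =
			(struct foo_ema_blob *)lsm_ema_data(ema, foo_blob_sizes);

		blob->protections = flags;
		/* policy decision based on source->vm_file would go here */
		return 0;
	}

The map management, splitting/merging and locking stay in one place instead
of being replicated per module.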
Stephen Smalley June 29, 2019, 1:22 a.m. UTC | #9
On Fri, Jun 28, 2019 at 5:54 PM Xing, Cedric <cedric.xing@intel.com> wrote:
>
> > From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> > owner@vger.kernel.org] On Behalf Of Stephen Smalley
> > Sent: Friday, June 28, 2019 9:37 AM
> >
> > > lsm.ema.cache_decisions is on by default and
> > > could be turned off by appending “lsm.ema.cache_decisions=0” or
> > > “lsm.ema.cache_decisions=off” to the kernel command line.
> >
> > This seems problematic on a few fronts:
> >
> > - Specifying it as a boot parameter requires teaching admins / policy
> > developers to do something in addition to what they are already doing
> > when they want to create policy,
> >
> > - Limiting it to a boot parameter requires a reboot to change the mode
> > of operation, whereas SELinux offers runtime toggling of permissive mode
> > and even per-process (domain) permissive mode (and so does AppArmor),
>
> How about making a variable tunable via sysctl?

Better than a boot parameter but still not amenable to per-domain
permissive and still requires admins
to remember and perform an extra step before collecting denials.

>
> >
> > - In the cache_decisions=1 case, do we get any auditing at all?  If not,
> > that's a problem.  We want auditing not only when we are
> > generating/learning policy but also in operation.
>
> > Currently it doesn't generate an audit log, but I could add one, except it couldn't point to the exact file. But I can use the sigstruct file instead so administrators can at least tell which enclave violates the policy. Do you think that's acceptable?

Seems prone to user confusion and lacks precision in why the denial occurred.

>
> >
> > - There is the potential for inconsistencies to arise between the
> > enforcement applied with different cache_decisions values.
>
> The enforcement will be consistent. The difference only lies in the logs.
>
> >
> > I would suggest that we just never cache the decision and accept the
> > cost if we are going to take this approach.
>
> > This will also be a viable option. I don't think any enclaves would be comprised of a large number of files anyway. When SGX2 comes up, I think most enclaves will be instantiated from only one file and defer loading libraries at runtime. So in practice we are looking at keeping only one file open per enclave, which seems totally acceptable.
>
> Stephen (and everyone having an opinion on this), which way do you prefer? sysctl variable? Or never cache decisions?

I'd favor never caching decisions.
Stephen Smalley June 29, 2019, 1:37 a.m. UTC | #10
On Fri, Jun 28, 2019 at 1:22 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>
> On 6/27/2019 5:47 PM, Xing, Cedric wrote:
> >> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> >> Sent: Thursday, June 27, 2019 4:37 PM
> >>>> This code should not be mixed in with the LSM infrastructure.
> >>>> It should all be contained in its own module, under security/enclave.
> >>> lsm_ema is *intended* to be part of the LSM infrastructure.
> >> That's not going to fly, not for a minute.
> > Why not, if there's a need for it?
> >
> > And what's the concern here if it becomes part of the LSM infrastructure.
>
> The LSM infrastructure provides a framework for hooks
> and allocation of blobs. That's it. It's a layer for
> connecting system features like VFS, IPC and the IP stack
> to the security modules. It does not implement any policy
> of its own. We are not going to implement SGX or any other
> mechanism within the LSM infrastructure.

I don't think you understand the purpose of this code.  It isn't
implementing SGX, nor is it needed by SGX.
It is providing shared infrastructure for security modules, similar to
lsm_audit.c, so that security modules can enforce W^X or similar
memory protection guarantees for SGX enclave memory, which has unique
properties that render the existing mmap and mprotect hooks
insufficient. They can certainly implement it only for SELinux, but
then any other security module that wants to provide such guarantees
will have to replicate that code.

>
> >>>  It is going to be shared among all LSMs that would like to track
> >> enclave pages and their origins.
> >>
> >> That's true for InfiniBand, tun and sctp as well. Look at their
> >> implementations.
> > As far as I can tell, InfiniBand, tun and sctp, all of them seemed used inside SELinux only.
>
> So?
>
> > If you had a chance to look at v1 of my series, I started by burying everything inside SELinux too. But Stephen pointed out such tracking would be needed by all LSMs so code duplication might be a concern. Thus I responded by moving it into LSM infrastructure.
>
> What you need to do is move all the lsm_ema code into its own
> place (which could be security/enclave). Manage your internal
> data as you like. LSMs (e.g. SELinux) can call your APIs if
> needed. If the LSMs need to store SGX information with the file
> structure they need to include that in the space they ask for in
> the file blob.
>
>
> >>> And they could be extended to store more information as deemed
> >> appropriate by the LSM module.
> >>
> >> Which is what blobs are for, but that does not appear to be how
> >> you're using either the file blob or your new ema blob.
> > A lsm_ema_map pointer is stored in file->f_security.
>
> That's up to the individual security module to decide.
>
> >  Each lsm_ema_map contains a list of lsm_ema structures. In my last patch, SELinux stores a ema_security_struct with every ema, by setting selinux_blob_sizes.lbs_ema_data to sizeof(ema_security_struct).
>
> You are managing the ema map lists. You don't need the LSM
> infrastructure to do that.
>
> > ema_security_struct is initialized in selinux_enclave_load(), and checked in enclave_mprotect(), which is a subroutine of selinux_file_mprotect(). BTW, it is alloced/freed automatically by LSM infrastructure in security_enclave_load()/security_file_free().
>
> Do you mean security_enclave_load()/security_enclave_free() ?
> There is no way you can possibly have sane behavior if you're
> allocation and free aren't tied to the same blob.
>
> >>>  The last patch of this series shows how to extend EMA inside SELinux.
> >> I don't see (but I admit the code doesn't make a lot of sense to me)
> >> anything you couldn't do in the SELinux code by adding data to the
> >> file blob. The data you're adding to the LSM infrastructure doesn't
> >> belong there, and it doesn't need to be there.
> > You are correct. My v1 did it inside SELinux.
> >
> > The key question I think is whether only SELinux needs it, or all LSMs need it. Stephen thought it was the latter (and I agree with him) so I moved it into the LSM infrastructure to be shared, just like the auditing code.
>
> You are both right that it doesn't belong in the SELinux code.
> It also doesn't belong as part of the LSM infrastructure.
>
> >>>> Not acceptable for the LSM infrastructure. They
> >>>> are inconsistent with the way data is used there.
> >>> I'm not sure I understand this comment.
> >> It means that your definition and use of the lsm_ema_blob
> >> does not match the way other blobs are managed and used.
> >> The LSM infrastructure uses these entries in a very particular
> >> way, and you're trying to use it differently. Your might be
> >> able to change the rest of the enclave system to use it
> >> correctly, or you might be able to find a different place
> >> for it.
> > I'm still not sure why you think this (lbs_ema_data) is inconsistent with other blobs.
> >
> > Same as all other blobs, an LSM requests it by storing the needed size in it, and is assigned an offset, and the buffer is allocated/freed by the infrastructure. Am I missing anything?
>
> Yes. Aside from allocation and deletion the infrastructure does
> nothing with the blobs. The blobs are used only by the security
> modules. All other data is maintained and used elsewhere. SGX
> specific data needs to me maintained and managed elsewhere.
>
> >>> As I stated in the cover letter, the primary question is how to
> >> prevent SGX from being abused as a backdoor to make executable pages
> >> that would otherwise not be executable without SGX. Any LSM module
> >> unaware of that would leave that "hole" open. So tracking enclave pages
> >> will become a common task for all LSMs that care page protections, and
> >> that's why I place it inside LSM infrastructure.
> >>
> >> Page protections are an important part of many security features,
> >> but that's beside the point. The LSM system provides mechanism for
> >> providing additional restrictions to existing security mechanisms.
> >> First, you create the security mechanism (e.g. enclaves) then you
> >> add LSM hooks so that security modules (e.g. SELinux) can apply
> >> their own policies in addition. In support of this, the LSM blob
> >> mechanism allows security modules to maintain their own information
> >> about the system components (e.g. file, inode, cred, task) they
> >> care about. The LSM infrastructure does not itself provide or
> >> support security data or policy. That's strictly for the modules
> >> to do.
> > Agreed!
> >
> > EMA doesn't dictate policies for sure. Is it considered "security data"? I'm not sure the definition of "security data" here. It does store some "data", something that multiple LSM modules would need to duplicate if not pulled into a common place. It is meant to be a "helper" data structure, just like the auditing code.
>
> Good example. You'll see that there is no audit code in the
> LSM infrastructure. None. No audit data, either. It's all taken
> care of in the audit system.
>
>
Casey Schaufler June 29, 2019, 9:35 p.m. UTC | #11
On 6/28/2019 6:37 PM, Stephen Smalley wrote:
> On Fri, Jun 28, 2019 at 1:22 PM Casey Schaufler <casey@schaufler-ca.com> wrote:
>> On 6/27/2019 5:47 PM, Xing, Cedric wrote:
>>>> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
>>>> Sent: Thursday, June 27, 2019 4:37 PM
>>>>>> This code should not be mixed in with the LSM infrastructure.
>>>>>> It should all be contained in its own module, under security/enclave.
>>>>> lsm_ema is *intended* to be part of the LSM infrastructure.
>>>> That's not going to fly, not for a minute.
>>> Why not, if there's a need for it?
>>>
>>> And what's the concern here if it becomes part of the LSM infrastructure.
>> The LSM infrastructure provides a framework for hooks
>> and allocation of blobs. That's it. It's a layer for
>> connecting system features like VFS, IPC and the IP stack
>> to the security modules. It does not implement any policy
>> of it's own. We are not going to implement SGX or any other
>> mechanism within the LSM infrastructure.
> I don't think you understand the purpose of this code.  It isn't
> implementing SGX, nor is it needed by SGX.
> It is providing shared infrastructure for security modules, similar to
> lsm_audit.c, so that security modules can enforce W^X or similar
> memory protection guarantees for SGX enclave memory, which has unique
> properties that render the existing mmap and mprotect hooks
> insufficient. They can certainly implement it only for SELinux, but
> then any other security module that wants to provide such guarantees
> will have to replicate that code.

I am not objecting to the purpose of the code.
I *am* objecting to calling it part of the LSM infrastructure.
It needs to be its own thing, off somewhere else.
It must not use the lsm_ prefix. That's namespace pollution.
The code must not be embedded in the LSM infrastructure code,
that breaks with how everything else works.

... and the notion that you allocate data for one blob
that gets freed relative to another breaks the data management
model.
Andy Lutomirski June 29, 2019, 11:46 p.m. UTC | #12
On Thu, Jun 27, 2019 at 11:56 AM Cedric Xing <cedric.xing@intel.com> wrote:
>
> [... full commit message quoted; trimmed ...] EMAs will be instantiated for
> enclaves only so will not impose memory/performance overheads for regular
> applications/files. Please see include/linux/lsm_ema.h and security/lsm_ema.c
> for details.
>
> A new setup parameter – lsm.ema.cache_decisions has been introduced to offer
> the choice between memory consumption and accuracy of audit logs. Enabling
> lsm.ema.cache_decisions causes LSM framework NOT to keep backing files open for
> EMAs. While that saves memory, it requires LSM modules to make and cache
> decisions ahead of time, and makes it difficult for LSM modules to generate
> accurate audit logs. System administrators are expected to run LSM in
> permissive mode with lsm.ema.cache_decisions off to determine the minimal
> permissions needed, and then turn it back on in enforcing mode for optimal
> performance and memory usage. lsm.ema.cache_decisions is on by default and
> could be turned off by appending “lsm.ema.cache_decisions=0” or
> “lsm.ema.cache_decisions=off” to the kernel command line.

Just on a very cursory review, this seems like it's creating a bunch
of complexity (a whole new library and data structure), and I'm not
convinced the result is any better than Sean's version.
Xing, Cedric July 1, 2019, 5:11 p.m. UTC | #13
Hi Andy,

> From: Andy Lutomirski [mailto:luto@kernel.org]
> Sent: Saturday, June 29, 2019 4:47 PM
> 
> Just on a very cursory review, this seems like it's creating a bunch of
> complexity (a whole new library and data structure), and I'm not
> convinced the result is any better than Sean's version.

The new EMA data structure is to track enclave pages by range. Yes, Sean avoided that by storing similar information in the existing encl_page structure inside the SGX subsystem. But as I pointed out, his code has to iterate through *every* page in range, so mprotect() will be very slow if the range is large. So he would end up introducing something similar to achieve the same performance.
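For reference, this is roughly what a range-based check looks like with EMAs
(illustrative, untested sketch; the policy in check_one_ema() is made up, and
ema->source is only populated when decisions aren't cached):

	/* Illustrative only: a made-up per-range check using the EMA API */
	static int check_one_ema(struct lsm_ema *ema, void *arg)
	{
		size_t prot = *(size_t *)arg;

		/* made-up policy: deny PROT_EXEC on ranges with no backing file */
		if ((prot & PROT_EXEC) && !ema->source)
			return -EACCES;
		return 0;
	}

	static int check_enclave_range(struct file *encl, size_t start,
				       size_t end, size_t prot)
	{
		struct lsm_ema_map *map = lsm_get_ema_map(encl);
		int rc;

		if (!map)	/* not an enclave */
			return 0;

		rc = lsm_lock_ema(map);
		if (rc)
			return rc;
		/* one callback per overlapping EMA, not one per page */
		rc = lsm_for_each_ema(map, start, end, check_one_ema, &prot);
		lsm_unlock_ema(map);
		return rc;
	}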

And that's not the most important point. The major problem in his patch lies in SGX2 support, as #PF driven EAUG cannot be supported (or he'd have to amend his code accordingly, which will add complexity and tip your scale). 

Other weird things, such as mmap()'ing page by page vs. mmap()'ing the whole range impacting subsequent mprotect()'s, as you have noticed, don't exist in my series.
Xing, Cedric July 1, 2019, 5:57 p.m. UTC | #14
> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> owner@vger.kernel.org] On Behalf Of Casey Schaufler
> 
> On 6/28/2019 6:37 PM, Stephen Smalley wrote:
> > On Fri, Jun 28, 2019 at 1:22 PM Casey Schaufler <casey@schaufler-
> ca.com> wrote:
> >> On 6/27/2019 5:47 PM, Xing, Cedric wrote:
> >>>> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> >>>> Sent: Thursday, June 27, 2019 4:37 PM
> >>>>>> This code should not be mixed in with the LSM infrastructure.
> >>>>>> It should all be contained in its own module, under
> security/enclave.
> >>>>> lsm_ema is *intended* to be part of the LSM infrastructure.
> >>>> That's not going to fly, not for a minute.
> >>> Why not, if there's a need for it?
> >>>
> >>> And what's the concern here if it becomes part of the LSM
> infrastructure.
> >> The LSM infrastructure provides a framework for hooks and allocation
> >> of blobs. That's it. It's a layer for connecting system features like
> >> VFS, IPC and the IP stack to the security modules. It does not
> >> implement any policy of it's own. We are not going to implement SGX
> >> or any other mechanism within the LSM infrastructure.
> > I don't think you understand the purpose of this code.  It isn't
> > implementing SGX, nor is it needed by SGX.
> > It is providing shared infrastructure for security modules, similar to
> > lsm_audit.c, so that security modules can enforce W^X or similar
> > memory protection guarantees for SGX enclave memory, which has unique
> > properties that render the existing mmap and mprotect hooks
> > insufficient. They can certainly implement it only for SELinux, but
> > then any other security module that wants to provide such guarantees
> > will have to replicate that code.
> 
> I am not objecting to the purpose of the code.
> I *am* objecting to calling it part of the LSM infrastructure.
> It needs to be its own thing, off somewhere else.
> It must not use the lsm_ prefix. That's namespace pollution.
> The code must not be embedded in the LSM infrastructure code, that
> breaks with how everything else works.

If you understand the purpose, then why are you objecting to the lsm_ prefix, as they are APIs to be used by all LSM modules? Or what should the prefix be, in your mind? Or what kind of APIs do you think would qualify for the lsm_ prefix?

And I'd like to clarify that it doesn't break anything; it is just a bit different, in that security_enclave_load() and security_file_free() call into those APIs. But there's a need for them because otherwise code/data would have to be duplicated among LSMs and the logic would be harder to comprehend. So that's a trade-off. Then what's the practical drawback of doing that? And if there is none, why would we want to pay the cost of not doing it?

> 
> ... and the notion that you allocate data for one blob that gets freed
> relative to another breaks the data management model.

What do you mean here?

EMA blobs are allocated/freed *not* relative to any other blobs.
Andy Lutomirski July 1, 2019, 5:58 p.m. UTC | #15
On Mon, Jul 1, 2019 at 10:11 AM Xing, Cedric <cedric.xing@intel.com> wrote:
>
> Hi Andy,
>
> > From: Andy Lutomirski [mailto:luto@kernel.org]
> > Sent: Saturday, June 29, 2019 4:47 PM
> >
> > Just on a very cursory review, this seems like it's creating a bunch of
> > complexity (a whole new library and data structure), and I'm not
> > convinced the result is any better than Sean's version.
>
> The new EMA data structure is to track enclave pages by range. Yes, Sean avoided that by storing similar information in the existing encl_page structure inside the SGX subsystem. But as I pointed out, his code has to iterate through *every* page in range, so mprotect() will be very slow if the range is large. So he would end up introducing something similar to achieve the same performance.

It seems odd to stick it in security/ if it only has one user, though.
Also, if it wasn't in security/, then the security folks would stop
complaining :)


>
> And that's not the most important point. The major problem in his patch lies in SGX2 support, as #PF driven EAUG cannot be supported (or he'd have to amend his code accordingly, which will add complexity and tip your scale).
>

Why can't it be?
Xing, Cedric July 1, 2019, 6:02 p.m. UTC | #16
> From: Stephen Smalley [mailto:stephen.smalley@gmail.com]
> Sent: Friday, June 28, 2019 6:22 PM
> 
> On Fri, Jun 28, 2019 at 5:54 PM Xing, Cedric <cedric.xing@intel.com>
> wrote:
> >
> > > From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> > > owner@vger.kernel.org] On Behalf Of Stephen Smalley
> > > Sent: Friday, June 28, 2019 9:37 AM
> > >
> > > > lsm.ema.cache_decisions is on by default and could be turned off
> > > > by appending “lsm.ema.cache_decisions=0” or
> > > > “lsm.ema.cache_decisions=off” to the kernel command line.
> > >
> > > This seems problematic on a few fronts:
> > >
> > > - Specifying it as a boot parameter requires teaching admins /
> > > policy developers to do something in addition to what they are
> > > already doing when they want to create policy,
> > >
> > > - Limiting it to a boot parameter requires a reboot to change the
> > > mode of operation, whereas SELinux offers runtime toggling of
> > > permissive mode and even per-process (domain) permissive mode (and
> > > so does AppArmor),
> >
> > How about making a variable tunable via sysctl?
> 
> Better than a boot parameter but still not amenable to per-domain
> permissive and still requires admins to remember and perform an extra
> step before collecting denials.
> 
> >
> > >
> > > - In the cache_decisions=1 case, do we get any auditing at all?  If
> > > not, that's a problem.  We want auditing not only when we are
> > > generating/learning policy but also in operation.
> >
> > Currently it doesn't generate audit log, but I could add it, except it
> couldn't point to the exact file. But I can use the sigstruct file
> instead so administrators can at least tell which enclave violates the
> policy. Do you think it acceptable?
> 
> Seems prone to user confusion and lacks precision in why the denial
> occurred.
> 
> >
> > >
> > > - There is the potential for inconsistencies to arise between the
> > > enforcement applied with different cache_decisions values.
> >
> > The enforcement will be consistent. The difference only lies in the
> logs.
> >
> > >
> > > I would suggest that we just never cache the decision and accept the
> > > cost if we are going to take this approach.
> >
> > This will also be a viable option. I don't think any enclaves would be
> comprised of a large number of files anyway. When SGX2 comes up, I think
> most enclaves will be instantiated from only one file and defer loading
> libraries at runtime. So in practice we are looking to keeping only one
> file open per enclave, which seems totally acceptable.
> >
> > Stephen (and everyone having an opinion on this), which way do you
> prefer? sysctl variable? Or never cache decisions?
> 
> I'd favor never caching decisions.

Alright, I'll remove the boot parameter and never cache decisions.
Xing, Cedric July 1, 2019, 6:31 p.m. UTC | #17
> From: Andy Lutomirski [mailto:luto@kernel.org]
> Sent: Monday, July 01, 2019 10:58 AM
> 
> On Mon, Jul 1, 2019 at 10:11 AM Xing, Cedric <cedric.xing@intel.com>
> wrote:
> >
> > Hi Andy,
> >
> > > From: Andy Lutomirski [mailto:luto@kernel.org]
> > > Sent: Saturday, June 29, 2019 4:47 PM
> > >
> > > Just on a very cursory review, this seems like it's creating a bunch
> > > of complexity (a whole new library and data structure), and I'm not
> > > convinced the result is any better than Sean's version.
> >
> > The new EMA data structure is to track enclave pages by range. Yes,
> Sean avoided that by storing similar information in the existing
> encl_page structure inside the SGX subsystem. But as I pointed out, his
> code has to iterate through *every* page in range, so mprotect() will be
> slow if the range is large. So he would end up introducing something
> similar to achieve the same performance.
> 
> It seems odd to stick it in security/ if it only has one user, though.
> Also, if it wasn't in security/, then the security folks would stop
> complaining :)

That's where I started. EMA (though named differently in my v1) was buried inside and used only by SELinux. But Stephen thought it useful for other LSMs as well, as it could be expected that other LSMs would also need to track enclave pages and end up duplicating what's done inside SELinux. 

I'm ok either way, though I do agree with Stephen's assessment.

> 
> 
> >
> > And that's not the most important point. The major problem in his
> patch lies in SGX2 support, as #PF driven EAUG cannot be supported (or
> he'd have to amend his code accordingly, which will add complexity and
> tip your scale).
> >
> 
> Why can't it be?

Let me take that back. It *is* important, as it is where LSM folks are divided.

I intended to say that the major reason I objected to Sean's approach was its inability to support SGX2 smoothly - as #PF driven EAUG requires non-existent pages to be mmap()'ed; otherwise vm_ops->fault wouldn't be dispatched, so EAUG couldn't be issued in response to #PF.
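I.e., something along these lines would be needed in the driver (purely
hypothetical sketch; none of these sgx_* helpers exist in either series):

	/* Purely hypothetical sketch -- the sgx_* helpers are made up */
	static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
	{
		struct sgx_encl *encl = vmf->vma->vm_private_data;
		unsigned long addr = vmf->address & PAGE_MASK;

		/*
		 * This is only reached if addr lies inside an existing VMA of
		 * the enclave fd -- hence the range must be mmap()'ed up front.
		 */
		if (!sgx_encl_page_present(encl, addr))
			return sgx_encl_eaug_page(encl, addr, vmf); /* SGX2 EAUG */

		return sgx_encl_map_page(encl, addr, vmf);
	}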
Andy Lutomirski July 1, 2019, 7:36 p.m. UTC | #18
On Mon, Jul 1, 2019 at 11:31 AM Xing, Cedric <cedric.xing@intel.com> wrote:
> I intended to say that the major reason I objected to Sean's approach was its inability to support SGX2 smoothly - as #PF driven EAUG requires non-existent pages to be mmap()'ed; otherwise vm_ops->fault wouldn't be dispatched, so EAUG couldn't be issued in response to #PF.

I still think that, if the kernel wants to support #PF-driven EAUG, it
should be an opt-in thing.  It would be something like
SGX_IOC_ADD_LAZY_EAUG_PAGES or similar.  If it's done that way, then
the driver needs to learn how to track ranges of pages efficiently,
which is another reason to consider leaving all the fancy page / page
range tracking in the driver.
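For the record, I'm imagining something with this shape -- nothing below
exists, it's just a sketch of the opt-in ABI:

	/* Imaginary ABI sketch -- neither the struct nor the ioctl exists */
	struct sgx_enclave_add_lazy_pages {
		__u64	offset;		/* page-aligned offset within ELRANGE */
		__u64	length;		/* size of the lazily-populated range */
		__u64	secinfo;	/* permissions for pages EAUG'ed on #PF */
	};

	#define SGX_IOC_ENCLAVE_ADD_LAZY_PAGES \
		_IOW(0xa4, 0x0f, struct sgx_enclave_add_lazy_pages)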

I don't think it's a good idea for a page fault on any non-EADDed page
in ELRANGE to automatically populate the page.
Casey Schaufler July 1, 2019, 7:53 p.m. UTC | #19
On 7/1/2019 10:57 AM, Xing, Cedric wrote:
>> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
>> owner@vger.kernel.org] On Behalf Of Casey Schaufler
>>
>> On 6/28/2019 6:37 PM, Stephen Smalley wrote:
>>> On Fri, Jun 28, 2019 at 1:22 PM Casey Schaufler <casey@schaufler-
>> ca.com> wrote:
>>>> On 6/27/2019 5:47 PM, Xing, Cedric wrote:
>>>>>> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
>>>>>> Sent: Thursday, June 27, 2019 4:37 PM
>>>>>>>> This code should not be mixed in with the LSM infrastructure.
>>>>>>>> It should all be contained in its own module, under
>> security/enclave.
>>>>>>> lsm_ema is *intended* to be part of the LSM infrastructure.
>>>>>> That's not going to fly, not for a minute.
>>>>> Why not, if there's a need for it?
>>>>>
>>>>> And what's the concern here if it becomes part of the LSM
>>>>> infrastructure.
>>>> The LSM infrastructure provides a framework for hooks and allocation
>>>> of blobs. That's it. It's a layer for connecting system features like
>>>> VFS, IPC and the IP stack to the security modules. It does not
>>>> implement any policy of its own. We are not going to implement SGX
>>>> or any other mechanism within the LSM infrastructure.
>>> I don't think you understand the purpose of this code.  It isn't
>>> implementing SGX, nor is it needed by SGX.
>>> It is providing shared infrastructure for security modules, similar to
>>> lsm_audit.c, so that security modules can enforce W^X or similar
>>> memory protection guarantees for SGX enclave memory, which has unique
>>> properties that render the existing mmap and mprotect hooks
>>> insufficient. They can certainly implement it only for SELinux, but
>>> then any other security module that wants to provide such guarantees
>>> will have to replicate that code.
>> I am not objecting to the purpose of the code.
>> I *am* objecting to calling it part of the LSM infrastructure.
>> It needs to be its own thing, off somewhere else.
>> It must not use the lsm_ prefix. That's namespace pollution.
>> The code must not be embedded in the LSM infrastructure code, that
>> breaks with how everything else works.
> If you understand the purpose,

The purpose is to support the SGX hardware, is it not?
If you don't have SGX hardware (e.g. MIPS, ARM, s390) you
don't need this code.

> then why are you objecting to the lsm_ prefix as they are APIs to be used by all LSM modules?

We name interfaces based on what they provide, not who consumes them.
As your code provides enclave services, that is how they should be named.

>  Or what should be the prefix in your mind?

I'm pretty sure that I've consistently suggested "enclave". 

> Or what kind of APIs do you think can qualify the lsm_ prefix?

Code that implements the LSM infrastructure. There is one LSM
blob allocation interface, lsm_inode_alloc(), used in early
set-up, that is exported. As I've mentioned more than once,
enclave/page management is not an LSM infrastructure function,
it's a memory management function.

> And I'd like to clarify that it doesn't break anything, but is just a bit different, in that security_enclave_load() and security_file_free() call into those APIs.

There should be nothing in security_enclave_load() except calls to the enclave_load()
hooks (e.g. selinux_enclave_load()). There should be nothing in security_file_free()
except file blob management calls to the file_free() hooks (e.g. apparmor_file_free()). 

> But there's a need for them because otherwise code/data would have to be duplicated among LSMs

There's plenty of code duplication among the LSMs, because a lot
of what they do is the same thing. Someday there may be an effort
to address some of that, but I don't think it's on anybody's radar.
As for data duplication, there's a reason we use lots of pointers.

> and the logic would be harder to comprehend.

Keeping the layering clean is critical to comprehension.
There's a lot of VFS code that could have been implemented
within the LSM infrastructure, but I don't think that anyone
would argue that it should have been.

> So that's a trade-off.

I remain completely unconvinced that your proposal
represents a good way to implement your scheme.

> Then what's the practical drawback of doing that?

Name space pollution.
Layering violation.
Architecture specific implementation detail in a
general infrastructure.

> If no, why would we want to pay for the cost for not doing that?

Modularity and maintainability come directly to mind.

>> ... and the notion that you allocate data for one blob that gets freed
>> relative to another breaks the data management model.
> What do you mean here?

You're freeing the EMA data from security_file_free().
If selinux wants to free EMA data it has allocated in
selinux_enclave_load() in selinux_file_free() that's fine,
but the LSM infrastructure has no need to know about it.
EMA needs to manage its own data, just like VFS does.
The LSM infrastructure provides blob management so that
the security modules can extend data if they want to.

> EMA blobs are allocated/freed *not* relative to any other blobs.

In the code you proposed they are freed in security_file_free().
That is for file blob management.
Xing, Cedric July 1, 2019, 7:56 p.m. UTC | #20
> From: Andy Lutomirski [mailto:luto@kernel.org]
> Sent: Monday, July 01, 2019 12:36 PM
> 
> On Mon, Jul 1, 2019 at 11:31 AM Xing, Cedric <cedric.xing@intel.com>
> wrote:
> > I intended to say that the major reason I objected to Sean's approach
> > was its inability to support SGX2 smoothly - as #PF-driven EAUG
> > requires non-existent pages to be mmap()'ed, otherwise vm_ops->fault
> > wouldn't be dispatched, so EAUG couldn't be issued in response to #PF.
> 
> I still think that, if the kernel wants to support #PF-driven EAUG, it
> should be an opt-in thing.  It would be something like
> SGX_IOC_ADD_LAZY_EAUG_PAGES or similar.  If it's done that way, then
> the driver needs to learn how to track ranges of pages efficiently,
> which is another reason to consider leaving all the fancy page / page
> range tracking in the driver.
> 
> I don't think it's a good idea for a page fault on any non-EADDed page
> in ELRANGE to automatically populate the page.

I'm with you. The user code shall be explicit about which ranges to EAUG pages in upon #PF. What I'm saying is, a range has to be mapped before the driver can receive #PFs (in the form of vm_ops->fault callbacks). But Sean's series doesn't support that, because no pages can be mapped before coming into existence.
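To illustrate the dependency (a hypothetical sketch; the sgx_encl_* helpers
below are illustrative names, not functions from either series), note that
none of this can run unless the range was mmap()'ed first, because the
fault handler is only reached through an existing VMA:

#include <linux/mm.h>

static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
{
	struct sgx_encl *encl = vmf->vma->vm_private_data;
	unsigned long addr = vmf->address & PAGE_MASK;

	/* Only populate ranges explicitly opted in for lazy EAUG. */
	if (!sgx_encl_addr_is_lazy(encl, addr))
		return VM_FAULT_SIGBUS;

	/* EAUG a fresh EPC page at 'addr', then map it. */
	if (sgx_encl_eaug_page(encl, addr))
		return VM_FAULT_SIGBUS;

	return vmf_insert_pfn(vmf->vma, addr, sgx_encl_epc_pfn(encl, addr));
}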

For "page tracking", what information to track is LSM dependent, so it may run into problems if different LSMs want to track different things. And that's the major reason I think it should be done inside LSM. 

Besides, the current page tracking structure in the driver is page oriented and its sole purpose is to serve #PFs. Page protection is better tracked using range oriented structures. Those 2 are orthogonal. It wouldn't reduce the complexity of the whole kernel by moving it into SGX driver.
Xing, Cedric July 1, 2019, 9:45 p.m. UTC | #21
> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> owner@vger.kernel.org] On Behalf Of Casey Schaufler
> Sent: Monday, July 01, 2019 12:54 PM
> > If you understand the purpose,
> 
> The purpose is to support the SGX hardware, is it not?
> If you don't have SGX hardware (e.g. MIPS, ARM, s390) you don't need
> this code.

No, it is NOT to support SGX - i.e. SGX doesn't require this piece of code to work.

And as Dr. Greg pointed out, it can be used for other TEEs than SGX. It doesn't contain SGX h/w specifics. It is compiled out because there's no module calling it on other architectures today. But it doesn't conflict with any h/w and may be useful (for other TEEs) on other architectures in future.

> 
> > then why are you objecting to the lsm_ prefix as they are APIs to be
> > used by all LSM modules?
> 
> We name interfaces based on what they provide, not who consumes them.
> As your code provides enclave services, that is how they should be named.
> 
> >  Or what should be the prefix in your mind?
> 
> I'm pretty sure that I've consistently suggested "enclave".
> 
> > Or what kind of APIs do you think can qualify the lsm_ prefix?
> 
> Code that implements the LSM infrastructure. There is one LSM blob
> allocation interface, lsm_inode_alloc(), used in early set-up, that is
> exported. As I've mentioned more than once, enclave/page management is
> not an LSM infrastructure function, it's a memory management function.

It doesn't manage anything. The reason it appears in the infrastructure is that the decision to insert an EMA depends on the decisions from *all* active LSMs. That is NOT new either, as you can see in security_file_permission() and security_vm_enough_memory_mm(); both do something after all LSM modules make their decisions.
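For reference, this is roughly what security_vm_enough_memory_mm() looks
like in security/security.c (paraphrased; see the tree for the exact code).
The loop polls every module, and the final accounting call is made by the
infrastructure itself:

int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
{
	struct security_hook_list *hp;
	int cap_sys_admin = 1;
	int rc;

	/* Any module returning <= 0 clears the cap_sys_admin bonus. */
	hlist_for_each_entry(hp, &security_hook_heads.vm_enough_memory, list) {
		rc = hp->hook.vm_enough_memory(mm, pages);
		if (rc <= 0) {
			cap_sys_admin = 0;
			break;
		}
	}
	return __vm_enough_memory(mm, pages, cap_sys_admin);
}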

Would you please explain why you don't see those as problems, but see calling EMA functions in security_enclave_load() as a problem?

> 
> > And I'd like to clarify that it doesn't break anything, but is just a
> bit different, in that security_enclave_load() and security_file_free()
> call into those APIs.
> 
> There should be nothing in security_enclave_load() except calls to the
> enclave_load() hooks (e.g. selinux_enclave_load()). There should be
> nothing in security_file_free() except file blob management calls to the
> file_free() hooks (e.g. apparmor_file_free()).

As above, there are examples in security/security.c where the hook does more than just calling registered hooks from LSMs.

> 
> > But there's a need for them because otherwise code/data would have to
> > be duplicated among LSMs
> 
> There's plenty of code duplication among the LSMs, because a lot of what
> they do is the same thing. Someday there may be an effort to address
> some of that, but I don't think it's on anybody's radar.
> As for data duplication, there's a reason we use lots of pointers.

As stated above, security_enclave_load() needs to do something extra after all LSMs make their decisions. How can pointers help here?
 
> 
> > and the logic would be harder to comprehend.
> 
> Keeping the layering clean is critical to comprehension.
> There's a lot of VFS code that could have been implemented within the
> LSM infrastructure, but I don't think that anyone would argue that it
> should have been.
> 
> > So that's a trade-off.
> 
> I remain completely unconvinced that your proposal represents a good way
> to implement your scheme.
> 
> > Then what's the practical drawback of doing that?
> 
> Name space pollution.

Alright, I can fix the names.

> Layering violation.

Not sure what you are referring to. 

If you are referring to buffers allocated in one layer and freed elsewhere, you have got the code wrong. Buffers allocated in security_enclave_load() are freed in security_file_free(). Whatever else is allocated in LSMs is not seen or taken care of by the infrastructure. The purpose of allocating EMAs in enclave_load() is to minimize overhead for non-enclave files; otherwise it could be done in file_alloc() to be more "paired" with file_free(). But I don't see that as necessary.

> Architecture specific implementation detail in a general infrastructure.

Stated earlier, it doesn't contain any h/w specifics but just a TEE abstraction. It could be left on all the time or controlled by a different config macro. It is contingent on CONFIG_INTEL_SGX just for convenience, as SGX is the first (and so far only) TEE that needs attention from LSM, but there could be more in the future.

> 
> > If no, why would we want to pay for the cost for not doing that?
> 
> Modularity and maintainability come directly to mind.

Putting it elsewhere will incur more maintenance cost.

> 
> >> ... and the notion that you allocate data for one blob that gets
> >> freed relative to another breaks the data management model.
> > What do you mean here?
> 
> You're freeing the EMA data from security_file_free().
> If selinux wants to free EMA data it has allocated in
> selinux_enclave_load() in selinux_file_free() that's fine, but the LSM
> infrastructure has no need to know about it.
> EMA needs to manage its own data, just like VFS does.
> The LSM infrastructure provides blob management so that the security
> modules can extend data if they want to.

You've got the code wrong. selinux_enclave_load() doesn't allocate any memory.  selinux_file_mprotect() may, due to EMA split. But that's transparent to all LSMs.

The LSM infrastructure doesn't know anything about what LSM modules do, nor does it manage any buffers allocated by any LSM modules.

EMA is currently managing its own data. What's needed is the trigger - to let EMA know when to update its states. The trigger could be placed in LSM infrastructure or inside individual LSMs. The reason to put it in the infrastructure, is that it depends on the decision of *all* LSMs whether to insert a new EMA. That's similar to vm_enough_memory() where the final __vm_enough_memory() call is made by the infrastructure but not individual LSMs.

> 
> > EMA blobs are allocated/freed *not* relative to any other blobs.
> 
> In the code you proposed they are freed in security_file_free().
> That is for file blob management.

Yes. EMA contributes to the file blob. But it only frees memory allocated by the infrastructure itself, not anything from any LSM modules.
Casey Schaufler July 1, 2019, 11:11 p.m. UTC | #22
On 7/1/2019 2:45 PM, Xing, Cedric wrote:
>> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
>> owner@vger.kernel.org] On Behalf Of Casey Schaufler
>> Sent: Monday, July 01, 2019 12:54 PM
>>> If you understand the purpose,
>> The purpose is to support the SGX hardware, is it not?
>> If you don't have SGX hardware (e.g. MIPS, ARM, s390) you don't need
>> this code.
> No, it is NOT to support SGX

Then what *is* it for?

>  - i.e. SGX doesn't require this piece of code to work.
>
> And as Dr. Greg pointed out, it can be used for other TEEs than SGX.

That sure makes it sound like it's for SGX to me.

>  It doesn't contain SGX h/w specifics.

I never said it did. But no one ever suggested doing anything
here before SGX, and your subject line:

	"x86/sgx: Add SGX specific LSM hooks"

says it does.

> It is compiled out because there's no module calling it on other architectures today. But it doesn't conflict with any h/w and may be useful (for other TEEs) on other architectures in future.
>
>>> then why are you objecting to the lsm_ prefix as they are APIs to be
>>> used by all LSM modules?
>>
>> We name interfaces based on what they provide, not who consumes them.
>> As your code provides enclave services, that is how they should be named.
>>
>>>  Or what should be the prefix in your mind?
>> I'm pretty sure that I've consistently suggested "enclave".
>>
>>> Or what kind of APIs do you think can qualify the lsm_ prefix?
>> Code that implements the LSM infrastructure. There is one LSM blob
>> allocation interface, lsm_inode_alloc(), used in early set-up, that is
>> exported. As I've mentioned more than once, enclave/page management is
>> not an LSM infrastructure function, it's a memory management function.
> It doesn't manage anything.

Sorry, "memory management" as in all that stuff around pages and
TLBs and who gets what pages, as opposed to keeping track of anything
on its own.

> The reason it appears in the infrastructure is that the decision to insert an EMA depends on the decisions from *all* active LSMs.

You have not been listening. Most LSMs use VFS. We haven't rolled VFS
functions into the LSM infrastructure.

> That is NOT new either, as you can see in security_file_permission() and security_vm_enough_memory_mm(); both do something after all LSM modules make their decisions.

Did you look to see what it is they're doing? If you had,
you would see that is nothing like what you're proposing.


> Would you please explain why you don't see those as problems, but see calling EMA functions in security_enclave_load() as a problem?

The enclave code should be calling security_enclave_load(),
not the other way around. That assumes you're using the naming
convention correctly.

security_vm_enough_memory_mm() was discussed at length and there
wasn't a clean way to get the logic right without putting the code
here. security_file_permission() has the fsnotify_perm() call for
similar reasons. Neither is anything like what you're suggesting.


>>> And I'd like to clarify that it doesn't break anything, but is just a
>> bit different, in that security_enclave_load() and security_file_free()
>> call into those APIs.
>>
>> There should be nothing in security_enclave_load() except calls to the
>> enclave_load() hooks (e.g. selinux_enclave_load()). There should be
>> nothing in security_file_free() except file blob management calls to the
>> file_free() hooks (e.g. apparmor_file_free()).
> As above, there are examples in security/security.c where the hook does more than just calling registered hooks from LSMs.

And as I've said, that doesn't matter. You're still going about
using the LSM infrastructure backwards.

>>> But there's a need for them because otherwise code/data would have to
>>> be duplicated among LSMs
>> There's plenty of code duplication among the LSMs, because a lot of what
>> they do is the same thing. Someday there may be an effort to address
>> some of that, but I don't think it's on anybody's radar.
>> As for data duplication, there's a reason we use lots of pointers.
> As stated above, security_enclave_load() needs to do something extra after all LSMs make their decisions. How can pointers help here?

I can explain it, but you clearly have no interest in doing
anything to make your code fit into the system. I have a lot
of other things to be doing.

>>> and the logic would be harder to comprehend.
>> Keeping the layering clean is critical to comprehension.
>> There's a lot of VFS code that could have been implemented within the
>> LSM infrastructure, but I don't think that anyone would argue that it
>> should have been.
>>
>>> So that's a trade-off.
>> I remain completely unconvinced that your proposal represents a good way
>> to implement your scheme.
>>
>>> Then what's the practical drawback of doing that?
>> Name space pollution.
> Alright, I can fix the names.

Good!


>> Layering violation.
> Not sure what you are referring to.

The only places where the blob freed by security_file_free()
may be allocated is security_file_alloc(). The security modules
are welcome to do anything they like in addition, provided
they clean up after themselves in their file_free() hooks.

If SELinux wants to support controls on enclave information,
and that requires additional data, SELinux should include
space in its file blob for that information, or a pointer to
the place where the enclave code is maintaining it.

That's the way audit works.
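A hypothetical sketch of that model (the 'enclave' field and
enclave_state_free() are illustrative names, and the real SELinux
file_security_struct has more fields than shown):

struct file_security_struct {
	u32 sid;			/* existing SELinux fields elided */
	struct enclave_state *enclave;	/* module-owned enclave tracking */
};

static void selinux_file_free_security(struct file *file)
{
	struct file_security_struct *fsec = selinux_file(file);

	/* The module, not the infrastructure, cleans up what it attached. */
	if (fsec->enclave)
		enclave_state_free(fsec->enclave);
}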

> If you are referring to buffers allocated in one layer and freed elsewhere, you have got the code wrong. Buffers allocated in security_enclave_load() are freed in security_file_free().

It's up to the security module's file_free() to clean up anything that
wasn't allocated in security_file_alloc(). Interested security modules
should call enclave_load(), and put the information into their portion
of the security blob. The module specific code can call enclave_file_free(),
or whatever interface you want to provide, to clean up. That might take
place in file_free(), but it also might be elsewhere.


> Whatever else is allocated in LSMs is not seen or taken care of by the infrastructure. The purpose of allocating EMAs in enclave_load() is to minimize overhead for non-enclave files; otherwise it could be done in file_alloc() to be more "paired" with file_free(). But I don't see that as necessary.

Try looking at maintaining what you've put into the LSM code as
a separate entity. It makes it simpler. Really.

>> Architecture specific implementation detail in a general infrastructure.
> Stated earlier, it doesn't contain any h/w specifics but just a TEE abstraction.

Then put it in the TEE system.

> It could be left on all the time or controlled by a different config macro.

True in any case.

> It is contingent on CONFIG_INTEL_SGX just for convenience, as SGX is the first (and so far only) TEE that needs attention from LSM, but there could be more in the future.

All the more reason to keep it separate. These things never get simpler
when they get more generalized.

>>> If no, why would we want to pay for the cost for not doing that?
>> Modularity and maintainability come directly to mind.
> Putting it elsewhere will incur more maintenance cost.

I don't believe that for a second. 40 years of C programming
have taught me that trying to do multiple things in one place
is always a bad idea.


>>>> ... and the notion that you allocate data for one blob that gets
>>>> freed relative to another breaks the data management model.
>>> What do you mean here?
>> You're freeing the EMA data from security_file_free().
>> If selinux wants to free EMA data it has allocated in
>> selinux_enclave_load() in selinux_file_free() that's fine, but the LSM
>> infrastructure has no need to know about it.
>> EMA needs to manage its own data, just like VFS does.
>> The LSM infrastructure provides blob management so that the security
>> modules can extend data if they want to.
> You've got the code wrong. selinux_enclave_load() doesn't allocate any memory.  selinux_file_mprotect() may, due to EMA split. But that's transparent to all LSMs.

... and the LSM infrastructure doesn't care and
must not be made to care. It's all up to SELinux.

> The LSM infrastructure doesn't know anything about what LSM modules do, nor does it manage any buffers allocated by any LSM modules.

Right, which is why putting your lsm_ema_blob there is wrong, and why
forcing it into the file blob is wrong.

> EMA is currently managing its own data. What's needed is the trigger - to let EMA know when to update its states. The trigger could be placed in LSM infrastructure or inside individual LSMs.

Yes. The latter.

> The reason to put it in the infrastructure, is that it depends on the decision of *all* LSMs whether to insert a new EMA.

That's basic stacking behavior. "Bail on fail", which says that once
denial is detected, you're done.
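For reference, that is what the infrastructure's call_int_hook() macro
implements (paraphrased from security/security.c): iterate the registered
hooks and stop at the first non-zero return.

#define call_int_hook(FUNC, IRC, ...) ({			\
	int RC = IRC;						\
	do {							\
		struct security_hook_list *P;			\
								\
		hlist_for_each_entry(P, &security_hook_heads.FUNC, list) { \
			RC = P->hook.FUNC(__VA_ARGS__);		\
			if (RC != 0)				\
				break;				\
		}						\
	} while (0);						\
	RC;							\
})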

> That's similar to vm_enough_memory() where the final __vm_enough_memory() call is made by the infrastructure but not individual LSMs.

Do you really understand the painful reasons that case is required?
And if so, why aren't you taking steps to avoid them?


>>> EMA blobs are allocated/freed *not* relative to any other blobs.
>> In the code you proposed they are freed in security_file_free().
>> That is for file blob management.
> Yes. EMA contributes to the file blob. But it only frees memory allocated by the infrastructure itself, not anything from any LSM modules.

That's not the way it's supposed to be done. The module tells
the infrastructure what it needs, which may include space for
EMA data. The module asks EMA for the data it needs and stuffs
it somewhere, and the file blob is a fine choice. The module
cleans up in file_free, or at any time before that. If no module
uses EMA, nothing goes in the blob. If two modules use EMA each
is responsible for the data it uses, which may be the same or
may be different.

I've looked at your code. Making it work the way it should would
not be difficult and would likely simplify a bunch of it.
Andy Lutomirski July 2, 2019, 2:29 a.m. UTC | #23
On Mon, Jul 1, 2019 at 12:56 PM Xing, Cedric <cedric.xing@intel.com> wrote:
>
> > From: Andy Lutomirski [mailto:luto@kernel.org]
> > Sent: Monday, July 01, 2019 12:36 PM
> >
> > On Mon, Jul 1, 2019 at 11:31 AM Xing, Cedric <cedric.xing@intel.com>
> > wrote:
> > > I intended to say that the major reason I objected to Sean's
> > > approach was its inability to support SGX2 smoothly - as #PF-driven
> > > EAUG requires non-existent pages to be mmap()'ed, otherwise
> > > vm_ops->fault wouldn't be dispatched, so EAUG couldn't be issued in
> > > response to #PF.
> >
> > I still think that, if the kernel wants to support #PF-driven EAUG, it
> > should be an opt-in thing.  It would be something like
> > SGX_IOC_ADD_LAZY_EAUG_PAGES or similar.  If it's done that way, then
> > the driver needs to learn how to track ranges of pages efficiently,
> > which is another reason to consider leaving all the fancy page / page
> > range tracking in the driver.
> >
> > I don't think it's a good idea for a page fault on any non-EADDed page
> > in ELRANGE to automatically populate the page.
>
> I'm with you. The user code shall be explicit about which ranges to EAUG pages in upon #PF. What I'm saying is, a range has to be mapped before the driver can receive #PFs (in the form of vm_ops->fault callbacks). But Sean's series doesn't support that, because no pages can be mapped before coming into existence.
>
> For "page tracking", what information to track is LSM dependent, so it may run into problems if different LSMs want to track different things. And that's the major reason I think it should be done inside LSM.
>
> Besides, the current page tracking structure in the driver is page oriented and its sole purpose is to serve #PFs. Page protection is better tracked using range oriented structures. Those 2 are orthogonal. It wouldn't reduce the complexity of the whole kernel by moving it into SGX driver.

It seems to me that the driver is going to need to improve its data
structures to track ranges of pages regardless of any LSM issues.  If
we're going to have an enclave with a huge ELRANGE and we're going to
mark some large subset of the full ELRANGE as allocate-on-demand, then
we are going to want to track that range in some efficient way.  It
could be a single extent or a set of power-of-two-sized extents (i.e.
radix tree entries), or something else, but a list of pages, of which
some are marked not-yet-allocated, isn't going to cut it.
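Even something as simple as the following would beat a per-page list here
(a hypothetical sketch assuming an rb-tree keyed by start address; the
names and the flag are made up):

#include <linux/rbtree.h>

struct sgx_encl_extent {
	struct rb_node	node;		/* keyed by 'start' in an rb-tree */
	unsigned long	start;		/* first byte covered, page-aligned */
	unsigned long	end;		/* one past the last byte covered */
	unsigned long	flags;		/* e.g. a hypothetical SGX_EXTENT_LAZY_EAUG */
};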

Once that happens, it seems natural to put whatever permission
tracking we need into the same data structure.  That's why my proposal
had the driver getting coarse-grained info from LSM ("may execute
dirty page", for example) rather than asking the LSM to track the
whole state machine.
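In other words, something like this hypothetical sketch (neither the hook
nor the flags exist in either series; the names are illustrative):

/* Coarse-grained grants the driver queries once and caches per range. */
#define ENCL_MAY_EXEC_DIRTY	0x1	/* may execute pages written after load */
#define ENCL_MAY_MAP_WX		0x2	/* may map a page both W and X */

int security_enclave_grants(struct file *encl, struct vm_area_struct *source,
			    unsigned long *grants);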

Does that make sense?
Xing, Cedric July 2, 2019, 6:35 a.m. UTC | #24
> From: Andy Lutomirski [mailto:luto@kernel.org]
> Sent: Monday, July 1, 2019 7:29 PM
> 
> On Mon, Jul 1, 2019 at 12:56 PM Xing, Cedric <cedric.xing@intel.com> wrote:
> >
> > > From: Andy Lutomirski [mailto:luto@kernel.org]
> > > Sent: Monday, July 01, 2019 12:36 PM
> > >
> > > On Mon, Jul 1, 2019 at 11:31 AM Xing, Cedric <cedric.xing@intel.com>
> > > wrote:
> > > > I intended to say that the major reason I objected to Sean's
> > > > approach was its inability to support SGX2 smoothly - as
> > > > #PF-driven EAUG requires non-existent pages to be mmap()'ed,
> > > > otherwise vm_ops->fault wouldn't be dispatched, so EAUG couldn't
> > > > be issued in response to #PF.
> > >
> > > I still think that, if the kernel wants to support #PF-driven EAUG, it
> > > should be an opt-in thing.  It would be something like
> > > SGX_IOC_ADD_LAZY_EAUG_PAGES or similar.  If it's done that way, then
> > > the driver needs to learn how to track ranges of pages efficiently,
> > > which is another reason to consider leaving all the fancy page / page
> > > range tracking in the driver.
> > >
> > > I don't think it's a good idea for a page fault on any non-EADDed page
> > > in ELRANGE to automatically populate the page.
> >
> > I'm with you. The user code shall be explicit about which ranges to
> > EAUG pages in upon #PF. What I'm saying is, a range has to be mapped
> > before the driver can receive #PFs (in the form of vm_ops->fault
> > callbacks). But Sean's series doesn't support that, because no pages
> > can be mapped before coming into existence.
> >
> > For "page tracking", what information to track is LSM dependent, so it may run into
> problems if different LSMs want to track different things. And that's the major reason I
> think it should be done inside LSM.
> >
> > Besides, the current page tracking structure in the driver is page oriented and its sole
> purpose is to serve #PFs. Page protection is better tracked using range oriented
> structures. Those 2 are orthogonal. It wouldn't reduce the complexity of the whole kernel
> by moving it into SGX driver.
> 
> It seems to me that the driver is going to need to improve its data
> structures to track ranges of pages regardless of any LSM issues.  If
> we're going to have an enclave with a huge ELRANGE and we're going to
> mark some large subset of the full ELRANGE as allocate-on-demand, then
> we are going to want to track that range in some efficient way.  It
> could be a single extent or a set of power-of-two-sized extents (i.e.
> radix tree entries), or something else, but a list of pages, of which
> some are marked not-yet-allocated, isn't going to cut it.
> 
> Once that happens, it seems natural to put whatever permission
> tracking we need into the same data structure.  That's why my proposal
> had the driver getting coarse-grained info from LSM ("may execute
> dirty page", for example) rather than asking the LSM to track the
> whole state machine.
> 
> Does that make sense?

The driver will eventually need some range-oriented structures for managing EAUGs. But they don't necessarily have to live in the same structure as other per-page information. After all, they are touched by different components of the driver and are indeed pretty orthogonal. The devil is always in the details. It may be counter-intuitive, but per our prototype years ago, it is simpler to just keep them separate.

IIUC, your idea is in fact keeping an FSM inside the SGX driver and using return values from security_enclave_load() to augment it. That means the LSM has to work quite "closely" with the SGX driver (i.e. the LSM needs to understand the FSM in the SGX driver), which is quite different from all other existing hooks, which basically make binary decisions only. And I'm not sure how to chain LSMs if there are multiple active at the same time.

Auditing is also a problem, as you can't generate an audit log at the time a real mprotect() request is approved/denied. The UAPI is also unpleasant, as the enclave loader has to "predict" the maximal protection, which is not always available to the loader at load time; otherwise significant changes to enclave build tools would be necessary.

I think the FSM is really part of the policy and internal to the LSM (or more particularly, to SELinux, as different LSM modules may have different FSMs), so it still makes more sense to me to keep "LSM internals" internal to the LSM.
Xing, Cedric July 2, 2019, 7:42 a.m. UTC | #25
> From: Casey Schaufler [mailto:casey@schaufler-ca.com]
> Sent: Monday, July 1, 2019 4:12 PM
> 
> On 7/1/2019 2:45 PM, Xing, Cedric wrote:
> >> From: linux-sgx-owner@vger.kernel.org [mailto:linux-sgx-
> >> owner@vger.kernel.org] On Behalf Of Casey Schaufler
> >> Sent: Monday, July 01, 2019 12:54 PM
> >>> If you understand the purpose,
> >> The purpose is to support the SGX hardware, is it not?
> >> If you don't have SGX hardware (e.g. MIPS, ARM, s390) you don't need
> >> this code.
> > No, it is NOT to support SGX
> 
> Then what *is* it for?
> 
> >  - i.e. SGX doesn't require this piece of code to work.
> >
> > And as Dr. Greg pointed out, it can be used for other TEEs than SGX.
> 
> That sure makes it sound like it's for SGX to me.

I meant it is generic and potentially useful to more TEEs, but so far used only by SGX, which is the first technology that uses this infrastructure. I hope that makes more sense.

> 
> >  It doesn't contain SGX h/w specifics.
> 
> I never said it did. But no one ever suggested doing anything
> here before SGX, and your subject line:
> 
> 	"x86/sgx: Add SGX specific LSM hooks"
> 
> says it does.

Yes. And in the commit message I also stated that the need for such a tracking structure is due to the unique lifespan of enclave pages. Hence this infrastructure will also be useful for other TEEs whose pages share the same lifespan.
 
> 
> > It is compiled out because there's no module calling it on other architectures today.
> But it doesn't conflict with any h/w and may be useful (for other TEEs) on other
> architectures in future.
> >
> >>> then why are you objecting to the lsm_ prefix as they are APIs to
> >>> be used by all LSM modules?
> >>
> >> We name interfaces based on what they provide, not who consumes them.
> >> As your code provides enclave services, that is how they should be named.
> >>
> >>>  Or what should be the prefix in your mind?
> >> I'm pretty sure that I've consistently suggested "enclave".
> >>
> >>> Or what kind of APIs do you think can qualify the lsm_ prefix?
> >> Code that implements the LSM infrastructure. There is one LSM blob
> >> allocation interface, lsm_inode_alloc(), used in early set-up, that
> >> is exported. As I've mentioned more than once, enclave/page
> >> management is not an LSM infrastructure function, it's a memory
> >> management function.
> > It doesn't manage anything.
> 
> Sorry, "memory management" as in all that stuff around pages and
> TLBs and who gets what pages, as opposed to keeping track of anything
> on its own.
> 
> > The reason it appears in the infrastructure is that the decision to
> > insert an EMA depends on the decisions from *all* active LSMs.
> 
> You have not been listening. Most LSMs use VFS. We haven't rolled VFS
> functions into the LSM infrastructure.
> 
> > That is NOT new either, as you can see in security_file_permission()
> > and security_vm_enough_memory_mm(); both do something after all LSM
> > modules make their decisions.
> 
> Did you look to see what it is they're doing? If you had,
> you would see that is nothing like what you're proposing.

I feel like we are talking about different things. I said those hooks do more than just call registered hooks. And security_enclave_load() is similar to them, and for a similar reason - something needs to be done after *all* LSM modules make their decisions.

I'm not sure what you are talking about.

> 
> 
> > Would you please explain why you don't see those as problems, but see
> > calling EMA functions in security_enclave_load() as a problem?
> 
> The enclave code should be calling security_enclave_load(),
> not the other way around. That assumes you're using the naming
> convention correctly.

Yes. The enclave code (SGX driver) calls security_enclave_load/init. Never the other way around.

Again, EMA code is similar to auditing code. It is supposed to be called by LSM modules. 

> 
> security_vm_enough_memory_mm() was discussed at length and there
> wasn't a clean way to get the logic right without putting the code
> here. security_file_permission() has the fsnotify_perm() call for
> similar reasons. Neither is anything like what you're suggesting.

I guess we are discussing "at length" right now how to get the logic right. And I'm not sure why "neither is anything like what I'm suggesting".

> 
> 
> >>> And I'd like to clarify that it doesn't break anything, but is just a
> >> bit different, in that security_enclave_load() and security_file_free()
> >> call into those APIs.
> >>
> >> There should be nothing in security_enclave_load() except calls to the
> >> enclave_load() hooks (e.g. selinux_enclave_load()). There should be
> >> nothing in security_file_free() except file blob management calls to the
> >> file_free() hooks (e.g. apparmor_file_free()).
> > As above, there are examples in security/security.c where the hook does more than just
> calling registered hooks from LSMs.
> 
> And as I've said, that doesn't matter. You're still going about
> using the LSM infrastructure backwards.
> 
> >>> But there's a need for them because otherwise code/data would have to
> >>> be duplicated among LSMs
> >> There's plenty of code duplication among the LSMs, because a lot of what
> >> they do is the same thing. Someday there may be an effort to address
> >> some of that, but I don't think it's on anybody's radar.
> >> As for data duplication, there's a reason we use lots of pointers.
> > As stated above, security_enclave_load() needs to do something extra after all LSMs make
> their decisions. How can pointers help here?
> 
> I can explain it, but you clearly have no interest in doing
> anything to make your code fit into the system. I have a lot
> of other things to be doing.

I'm very interested in making things fit. It's just that what I see as fitting doesn't seem to fit from your perspective.

> 
> >>> and the logic would be harder to comprehend.
> >> Keeping the layering clean is critical to comprehension.
> >> There's a lot of VFS code that could have been implemented within the
> >> LSM infrastructure, but I don't think that anyone would argue that it
> >> should have been.
> >>
> >>> So that's a trade-off.
> >> I remain completely unconvinced that your proposal represents a good way
> >> to implement your scheme.
> >>
> >>> Then what's the practical drawback of doing that?
> >> Name space pollution.
> > Alright, I can fix the names.
> 
> Good!
> 
> 
> >> Layering violation.
> > Not sure what you are referring to.
> 
> The only places where the blob freed by security_file_free()
> may be allocated is security_file_alloc(). The security modules
> are welcome to do anything they like in addition, provided
> they clean up after themselves in their file_free() hooks.

Exactly!

Like I said, allocation could happen in security_file_alloc() but I did it in security_enclave_load() to avoid unnecessary allocation for non-enclaves.

I know "security" looks close to "selinux" but I beg your attention in the function names. Whatever allocated inside SELinux *never* gets freed by the infrastructure, except those implicit allocations due to EMA splits.
 
> 
> If SELinux wants to support controls on enclave information,
> and that requires additional data, SELinux should include
> space in its file blob for that information, or a pointer to
> the place where the enclave code is maintaining it.
> 
> That's the way audit works.

I have to repeat myself. This is what v1 did.

The drawback is, there could be multiple LSMs active at the same time. And an EMA is inserted iff *all* LSMs have approved it. Thus the actual insertion is now done at the end of security_enclave_load(). That makes the logic clear and less error prone, and saves code that would otherwise be duplicated in multiple LSMs. And that's why I cited security_file_permission()/security_vm_enough_memory() as precedents for security_enclave_load().
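A rough sketch of that pattern (the hook signature is simplified from the
actual RFC patches, and ema_insert() stands in for the real EMA insertion
helper):

int security_enclave_load(struct file *encl, struct vm_area_struct *source,
			  unsigned long prot)
{
	int rc;

	/* Bail on fail: every module gets a veto. */
	rc = call_int_hook(enclave_load, 0, encl, source, prot);
	if (rc)
		return rc;

	/* Unanimous approval: record the range once, shared by all LSMs. */
	return ema_insert(encl, source->vm_start, source->vm_end, prot);
}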

> 
> > If you are referring to buffers allocated in one layer and freed
> > elsewhere, you have got the code wrong. Buffers allocated in
> > security_enclave_load() are freed in security_file_free().
> 
> It's up to the security module's file_free() to clean up anything that
> wasn't allocated in security_file_alloc(). Interested security modules
> should call enclave_load(), and put the information into their portion
> of the security blob. The module specific code can call enclave_file_free(),
> or whatever interface you want to provide, to clean up. That might take
> place in file_free(), but it also might be elsewhere.

Would you please take a closer look at my code? Whatever I added to SELinux did *not* allocate anything! Or are you talking about something else?

> 
> 
> > Whatever else is allocated in LSMs is not seen or taken care of by
> > the infrastructure. The purpose of allocating EMAs in enclave_load()
> > is to minimize overhead for non-enclave files; otherwise it could be
> > done in file_alloc() to be more "paired" with file_free(). But I
> > don't see that as necessary.
> 
> Try looking at maintaining what you've put into the LSM code as
> a separate entity. It makes it simpler. Really.

It is already a separate entity. It has its own header and own C file.

> 
> >> Architecture specific implementation detail in a general infrastructure.
> > Stated earlier, it doesn't contain any h/w specifics but just a TEE abstraction.
> 
> Then put it in the TEE system.

It's the LSM's abstraction of a TEE - i.e., it tracks what matters to the LSM only. The TEE doesn't care; it just provides information and asks for a decision on return. That's how LSM works.

> 
> > It could be left on all the time or controlled by a different config macro.
> 
> True in any case.
> 
> > It is contingent on CONFIG_INTEL_SGX just for convenience, as SGX is
> > the first (and so far only) TEE that needs attention from LSM, but
> > there could be more in the future.
> 
> All the more reason to keep it separate. These things never get simpler
> when they get more generalized.

I have a hard time understanding what you mean by "separate". 

> 
> >>> If no, why would we want to pay for the cost for not doing that?
> >> Modularity and maintainability come directly to mind.
> > Putting it elsewhere will incur more maintenance cost.
> 
> I don't believe that for a second. 40 years of C programming
> have taught me that trying to do multiple things in one place
> is always a bad idea.

Agreed. I'm doing only one thing.

> 
> 
> >>>> ... and the notion that you allocate data for one blob that gets
> >>>> freed relative to another breaks the data management model.
> >>> What do you mean here?
> >> You're freeing the EMA data from security_file_free().
> >> If selinux wants to free EMA data it has allocated in
> >> selinux_enclave_load() in selinux_file_free() that's fine, but the LSM
> >> infrastructure has no need to know about it.
> >> EMA needs to manage its own data, just like VFS does.
> >> The LSM infrastructure provides blob management so that the security
> >> modules can extend data if they want to.
> > You've got the code wrong. selinux_enclave_load() doesn't allocate any memory.
> selinux_file_mprotect() may, due to EMA split. But that's transparent to all LSMs.
> 
> ... and the LSM infrastructure doesn't care and
> must not be made to care. It's all up to SELinux.

It doesn't care about the decisions. But it assists in maintaining the information on which the decisions (from multiple LSMs) are based.

> 
> > The LSM infrastructure doesn't know anything about what LSM modules do, nor does it
> manage any buffers allocated by any LSM modules.
> 
> Right, which is why putting your lsm_ema_blob there is wrong, and why
> forcing it into the file blob is wrong.

lsm_ema_blob is NOT part of file blob.

> 
> > EMA is currently managing its own data. What's needed is the trigger - to let EMA know
> when to update its states. The trigger could be placed in LSM infrastructure or inside
> individual LSMs.
> 
> Yes. The latter.
> 
> > The reason to put it in the infrastructure, is that it depends on the decision of *all*
> LSMs whether to insert a new EMA.
> 
> That's basic stacking behavior. "Bail on fail", which says that once
> denial is detected, you're done.

Who does the insertion on success, if not the LSM infrastructure? This is again similar to security_file_permission/security_vm_enough_memory. The last step is done by the infrastructure on success.

> 
> > That's similar to vm_enough_memory() where the final __vm_enough_memory() call is made
> by the infrastructure but not individual LSMs.
> 
> Do you really understand the painful reasons that case is required?
> And if so, why aren't you taking steps to avoid them?

I think I've run into the same painful reasons.

Honestly, I tried not to do anything more than just a call_int_hook. But I realized that'd make things much more complicated and error prone in multiple-active-LSM cases.

And I don't see any actionable suggestions from you so far.

> 
> 
> >>> EMA blobs are allocated/freed *not* relative to any other blobs.
> >> In the code you proposed they are freed in security_file_free().
> >> That is for file blob management.
> > Yes. EMA contributes to the file blob. But it only frees memory allocated by the
> infrastructure itself, not anything from any LSM modules.
> 
> That's not the way it's supposed to be done. The module tells
> the infrastructure what it needs, which may include space for
> EMA data. The module asks EMA for the data it needs and stuffs
> it somewhere, and the file blob is a fine choice. The module
> cleans up in file_free, or at any time before that. If no module
> uses EMA, nothing goes in the blob. If two modules use EMA each
> is responsible for the data it uses, which may be the same or
> may be different.
> 
> I've looked at your code. Making it work the way it should would
> not be difficult and would likely simplify a bunch of it.

Guess this discussion will never end if we don't get into code. Guess it'd be more productive to talk over phone then come back to this thread with a conclusion. Will that be ok with you?
Casey Schaufler July 2, 2019, 3:44 p.m. UTC | #26
On 7/2/2019 12:42 AM, Xing, Cedric wrote:
> ...
> Guess this discussion will never end if we don't get into code. Guess it'd be more productive to talk over phone then come back to this thread with a conclusion. Will that be ok with you?

I don't think that a phone call is going to help. Talking
code issues tends to muddle them in my brain. If you can give
me a few days I will propose a rough version of how I think
your code should be integrated into the LSM environment. I'm
spending more time trying (unsuccessfully :( ) to describe
the issues in English than it will probably take in C.
Dr. Greg July 3, 2019, 9:46 a.m. UTC | #27
On Tue, Jul 02, 2019 at 08:44:40AM -0700, Casey Schaufler wrote:

Good morning, I hope this note finds the week going well for everyone.

> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
> > ...
> > Guess this discussion will never end if we don't get into
> > code. Guess it'd be more productive to talk over phone then come back
> > to this thread with a conclusion. Will that be ok with you?

> I don't think that a phone call is going to help. Talking code
> issues tends to muddle them in my brain. If you can give me a few
> days I will propose a rough version of how I think your code should
> be integrated into the LSM environment. I'm spending more time
> trying (unsuccessfully :( ) to describe the issues in English than
> it will probably take in C.

While Casey is off writing his rosetta stone, let me suggest that the
most important thing we need to do is to take a little time, step back
and look at the big picture with respect to what we are trying to
accomplish, and ask whether we are going about it in a way that makes
any sense from an engineering perspective.

This conversation shouldn't be about SGX, it should be about the best
way for the kernel/LSM to discipline a Trusted Execution Environment
(TEE).  As I have noted previously, a TEE is a 'blackbox' that, by
design, is intended to allow execution of code and processing of data
in a manner that is resistant to manipulation or inspection by
untrusted userspace, the kernel and/or the hardware itself.

Given that fact, if we are to be intellectually honest, we need to ask
ourselves how effective we believe we can be in controlling any TEE
with kernel based mechanisms.  This is particularly the case if the
author of any code running in the TEE has adversarial intent.

Here is the list of controls that we believe an LSM can, effectively,
implement against a TEE:

1.) Code provenance and origin.

2.) Cryptographic verification of dynamically executable content.

3.) The ability of a TEE to implement anonymous executable content.

If people are in agreement with this concept, it is difficult to
understand why we should be implementing complex state machines and
the like, whether it is in the driver or the LSM.  Security code has
to be measured with a metric of effectiveness, otherwise we are
engaging in security theater.

I believe that if we were using this lens, we would already have a
mainline SGX driver, since we seem to have most of the needed LSM
infrastructure and any additional functionality would be a
straightforward implementation.  Most importantly, the infrastructure would
not be SGX specific, which would seem to be a desirable political
concept.

If we are not willing to engage in this discussion we are going to end
up with a quasi-technology specific solution that isn't implementing
any relevant security guarantees.

FWIW, we wouldn't even be having this, now lengthy discussion, if I
wouldn't have aggressively advocated, starting last November, that an
SGX driver needed some form of execution control if there was a desire
for the technology to not pose a security risk to the platform.  So
humor me a little bit.... :-)

Best wishes for a productive remainder of the week to everyone.

Dr. Greg

As always,
Dr. Greg Wettstein, Ph.D, Worker
IDfusion, LLC
4206 N. 19th Ave.           Implementing measured information privacy
Fargo, ND  58102            and integrity architectures.
PH: 701-281-1686
FAX: 701-281-3949           EMAIL: greg@idfusion.net
------------------------------------------------------------------------------
"... remember that innovation is saying 'no' to 1000 things."
Casey Schaufler July 3, 2019, 3:32 p.m. UTC | #28
On 7/3/2019 2:46 AM, Dr. Greg wrote:
> On Tue, Jul 02, 2019 at 08:44:40AM -0700, Casey Schaufler wrote:
>
> Good morning, I hope this note finds the week going well for everyone.
>
>> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
>>> ...
>>> Guess this discussion will never end if we don't get into
>>> code. Guess it'd be more productive to talk over phone then come back
>>> to this thread with a conclusion. Will that be ok with you?
>> I don't think that a phone call is going to help. Talking code
>> issues tends to muddle them in my brain. If you can give me a few
>> days I will propose a rough version of how I think your code should
>> be integrated into the LSM environment. I'm spending more time
>> trying (unsuccessfully :( ) to describe the issues in English than
>> it will probably take in C.
> While Casey is off writing his rosetta stone,

I'd hardly call it that. More of an effort to round the
corners on the square peg. And Cedric has some ideas on
how to approach that.

> let me suggest that the
> most important thing we need to do is to take a little time, step back
> and look at the big picture with respect to what we are trying to
> accomplish and if we are going about it in a way that makes any sense
> from an engineering perspective.
>
> This conversation shouldn't be about SGX, it should be about the best
> way for the kernel/LSM to discipline a Trusted Execution Environment
> (TEE).  As I have noted previously, a TEE is a 'blackbox' that, by
> design, is intended to allow execution of code and processing of data
> in a manner that is resistant to manipulation or inspection by
> untrusted userspace, the kernel and/or the hardware itself.
>
> Given that fact, if we are to be intellectually honest, we need to ask
> ourselves how effective we believe we can be in controlling any TEE
> with kernel based mechanisms.  This is particularly the case if the
> author of any code running in the TEE has adversarial intent.
>
> Here is the list of controls that we believe an LSM can, effectively,
> implement against a TEE:
>
> 1.) Code provenance and origin.
>
> 2.) Cryptographic verification of dynamically executable content.
>
> 3.) The ability of a TEE to implement anonymous executable content.
>
> If people are in agreement with this concept, it is difficult to
> understand why we should be implementing complex state machines and
> the like, whether it is in the driver or the LSM.  Security code has
> to be measured with a metric of effectiveness, otherwise we are
> engaging in security theater.
>
> I believe that if we were using this lens, we would already have a
> mainline SGX driver, since we seem to have most of the needed LSM
> infrastructure and any additional functionality would be a
> straightforward implementation.  Most importantly, the infrastructure would
> not be SGX specific, which would seem to be a desirable political
> concept.

Generality introduced in the absence of multiple instances
often results in unnecessary complexity, unused interfaces
and feature compromise. Guessing what other TEE systems might
do, and constraining SGX to those models (or the other way around)
is a well established road to ruin. The LSM infrastructure is
a fine example. For the first ten years the "general" mechanism
had a single user. I'd say to hold off on the general until there
is more experience with the specific. It's easier to construct
a general mechanism around things that work than to fit things
that need to work into some preconceived notion of generality. 

>
> If we are not willing to engage in this discussion we are going to end
> up with a quasi-technology specific solution that isn't implementing
> any relevant security guarantees.
>
> FWIW, we wouldn't even be having this, now lengthy discussion, if I
> wouldn't have aggressively advocated, starting last November, that an
> SGX driver needed some form of execution control if there was a desire
> for the technology to not pose a security risk to the platform.  So
> humor me a little bit.... :-)
>
> Best wishes for a productive remainder of the week to everyone.
>
> Dr. Greg
>
> As always,
> Dr. Greg Wettstein, Ph.D, Worker
> IDfusion, LLC
> 4206 N. 19th Ave.           Implementing measured information privacy
> Fargo, ND  58102            and integrity architectures.
> PH: 701-281-1686
> FAX: 701-281-3949           EMAIL: greg@idfusion.net
> ------------------------------------------------------------------------------
> "... remember that innovation is saying 'no' to 1000 things."
Dr. Greg July 7, 2019, 1:30 p.m. UTC | #29
On Wed, Jul 03, 2019 at 08:32:10AM -0700, Casey Schaufler wrote:

Good morning, I hope the weekend has been enjoyable for everyone.

> >> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
> >>> ...
> >>> Guess this discussion will never end if we don't get into
> >>> code. Guess it'd be more productive to talk over phone then come back
> >>> to this thread with a conclusion. Will that be ok with you?
> >> I don't think that a phone call is going to help. Talking code
> >> issues tends to muddle them in my brain. If you can give me a few
> >> days I will propose a rough version of how I think your code should
> >> be integrated into the LSM environment. I'm spending more time
> >> trying (unsuccessfully :( ) to discribe the issues in English than
> >> it will probably take in C.
> > While Casey is off writing his rosetta stone,

> I'd hardly call it that. More of an effort to round the corners on
> the square peg. And Cedric has some ideas on how to approach that.

Should we infer from this comment that, of the two competing
strategies, Cedric's is the favored architecture?

> > let me suggest that the
> > most important thing we need to do is to take a little time, step back
> > and look at the big picture with respect to what we are trying to
> > accomplish and if we are going about it in a way that makes any sense
> > from an engineering perspective.
> >
> > This conversation shouldn't be about SGX, it should be about the best
> > way for the kernel/LSM to discipline a Trusted Execution Environment
> > (TEE).  As I have noted previously, a TEE is a 'blackbox' that, by
> > design, is intended to allow execution of code and processing of data
> > in a manner that is resistant to manipulation or inspection by
> > untrusted userspace, the kernel and/or the hardware itself.
> >
> > Given that fact, if we are to be intellectually honest, we need to ask
> > ourselves how effective we believe we can be in controlling any TEE
> > with kernel based mechanisms.  This is particularly the case if the
> > author of any code running in the TEE has adversarial intent.
> >
> > Here is the list of controls that we believe an LSM can, effectively,
> > implement against a TEE:
> >
> > 1.) Code provenance and origin.
> >
> > 2.) Cryptographic verification of dynamically executable content.
> >
> > 3.) The ability of a TEE to implement anonymous executable content.
> >
> > If people are in agreement with this concept, it is difficult to
> > understand why we should be implementing complex state machines and
> > the like, whether it is in the driver or the LSM.  Security code has
> > to be measured with a metric of effectiveness, otherwise we are
> > engaging in security theater.
> >
> > I believe that if we were using this lens, we would already have a
> > mainline SGX driver, since we seem to have most of the needed LSM
> > infrastructure and any additional functionality would be a
> > straightforward implementation.  Most importantly, the infrastructure would
> > not be SGX specific, which would seem to be a desirable political
> > concept.

> Generality introduced in the absence of multiple instances
> often results in unnecessary complexity, unused interfaces
> and feature compromise. Guessing what other TEE systems might
> do, and constraining SGX to those models (or the other way around)
> is a well established road to ruin. The LSM infrastructure is
> a fine example. For the first ten years the "general" mechanism
> had a single user. I'd say to hold off on the general until there
> is more experience with the specific. It's easier to construct
> a general mechanism around things that work than to fit things
> that need to work into some preconceived notion of generality. 

All well-taken points from an implementation perspective, but they
elide the point I was trying to make, which is that without any
semblance of a discussion regarding the requirements needed to
implement a security architecture around the concept of a TEE, this
entire process, despite Cedric's well-intentioned efforts, amounts to
pounding a square solution into the round hole of a security problem.

Which, as I noted in my e-mail, is tantamount to security theater.

Everyone wants to see this driver upstream.  If we had had a
reasoned discussion regarding what it means to implement proper
controls around a TEE, when we started to bring these issues forward
last November, we could have possibly been on the road to having a
driver with reasoned security controls and one that actually delivers
the security guarantees the hardware was designed to deliver.

Best wishes for a productive week to everyone.

Dr. Greg

As always,
Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
4206 N. 19th Ave.           Specializing in information infra-structure
Fargo, ND  58102            development.
PH: 701-281-1686            EMAIL: greg@enjellic.com
------------------------------------------------------------------------------
"Any intelligent fool can make things bigger and more complex... It
 takes a touch of genius - and a lot of courage to move in the opposite
 direction."
                                -- Albert Einstein
Casey Schaufler July 9, 2019, 12:02 a.m. UTC | #30
On 7/7/2019 6:30 AM, Dr. Greg wrote:
> On Wed, Jul 03, 2019 at 08:32:10AM -0700, Casey Schaufler wrote:
>
> Good morning, I hope the weekend has been enjoyable for everyone.
>
>>>> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
>>>>> ...
>>>>> Guess this discussion will never end if we don't get into
>>>>> code. Guess it'd be more productive to talk over the phone, then come back
>>>>> to this thread with a conclusion. Will that be ok with you?
>>>> I don't think that a phone call is going to help. Talking code
>>>> issues tends to muddle them in my brain. If you can give me a few
>>>> days I will propose a rough version of how I think your code should
>>>> be integrated into the LSM environment. I'm spending more time
>>>> trying (unsuccessfully :( ) to describe the issues in English than
>>>> it will probably take in C.
>>> While Casey is off writing his rosetta stone,
>> I'd hardly call it that. More of an effort to round the corners on
>> the square peg. And Cedric has some ideas on how to approach that.
> Should we infer from this comment that, of the two competing
> strategies, Cedric's is the favored architecture?

With Cedric's latest patches I'd say there's only one
strategy. There's still some refinement to do, but we're
getting there.


>>> let me suggest that the
>>> most important thing we need to do is to take a little time, step back
>>> and look at the big picture with respect to what we are trying to
>>> accomplish and if we are going about it in a way that makes any sense
>>> from an engineering perspective.
>>>
>>> This conversation shouldn't be about SGX, it should be about the best
>>> way for the kernel/LSM to discipline a Trusted Execution Environment
>>> (TEE).  As I have noted previously, a TEE is a 'blackbox' that, by
>>> design, is intended to allow execution of code and processing of data
>>> in a manner that is resistant to manipulation or inspection by
>>> untrusted userspace, the kernel and/or the hardware itself.
>>>
>>> Given that fact, if we are to be intellectually honest, we need to ask
>>> ourselves how effective we believe we can be in controlling any TEE
>>> with kernel based mechanisms.  This is particularly the case if the
>>> author of any code running in the TEE has adversarial intent.
>>>
>>> Here is the list of controls that we believe an LSM can, effectively,
>>> implement against a TEE:
>>>
>>> 1.) Code provenance and origin.
>>>
>>> 2.) Cryptographic verification of dynamically executable content.
>>>
>>> 3.) The ability of a TEE to implement anonymous executable content.
>>>
>>> If people are in agreement with this concept, it is difficult to
>>> understand why we should be implementing complex state machines and
>>> the like, whether it is in the driver or the LSM.  Security code has
>>> to be measured with a metric of effectiveness, otherwise we are
>>> engaging in security theater.
>>>
>>> I believe that if we were using this lens, we would already have a
>>> mainline SGX driver, since we seem to have most of the needed LSM
>>> infrastructure and any additional functionality would be a
>>> straightforward implementation.  Most importantly, the infrastructure would
>>> not be SGX specific, which would seem to be a desirable political
>>> concept.
>> Generality introduced in the absence of multiple instances
>> often results in unnecessary complexity, unused interfaces
>> and feature compromise. Guessing what other TEE systems might
>> do, and constraining SGX to those models (or the other way around)
>> is a well established road to ruin. The LSM infrastructure is
>> a fine example. For the first ten years the "general" mechanism
>> had a single user. I'd say to hold off on the general until there
>> is more experience with the specific. It's easier to construct
>> a general mechanism around things that work than to fit things
>> that need to work into some preconceived notion of generality. 
> All well-taken points from an implementation perspective, but they
> elide the point I was trying to make: without any semblance of a
> discussion regarding the requirements needed to implement a security
> architecture around the concept of a TEE, this entire process, despite
> Cedric's well-intentioned efforts, amounts to pounding a square
> solution into the round hole of a security problem.

Lead with code. I love a good requirements document, but
one of the few places where I agree with the agile folks is
that working code speaks loudly.

> Which, as I noted in my e-mail, is tantamount to security theater.

Not buying that. Not rejecting it, either. Without code
to judge it's kind of hard to say.

> Everyone wants to see this driver upstream.  If we had had a
> reasoned discussion regarding what it means to implement proper
> controls around a TEE, when we started to bring these issues forward
> last November, we could have possibly been on the road to having a
> driver with reasoned security controls and one that actually delivers
> the security guarantees the hardware was designed to deliver.
>
> Best wishes for a productive week to everyone.
>
> Dr. Greg
>
> As always,
> Dr. G.W. Wettstein, Ph.D.   Enjellic Systems Development, LLC.
> 4206 N. 19th Ave.           Specializing in information infra-structure
> Fargo, ND  58102            development.
> PH: 701-281-1686            EMAIL: greg@enjellic.com
> ------------------------------------------------------------------------------
> "Any intelligent fool can make things bigger and more complex... It
>  takes a touch of genius - and a lot of courage to move in the opposite
>  direction."
>                                 -- Albert Einstein
Sean Christopherson July 9, 2019, 1:52 a.m. UTC | #31
On Mon, Jul 08, 2019 at 05:02:00PM -0700, Casey Schaufler wrote:
> On 7/7/2019 6:30 AM, Dr. Greg wrote:
> > On Wed, Jul 03, 2019 at 08:32:10AM -0700, Casey Schaufler wrote:
> >
> > Good morning, I hope the weekend has been enjoyable for everyone.
> >
> >>>> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
> >>>>> ...
> >>>>> Guess this discussion will never end if we don't get into
> >>>>> code. Guess it'd be more productive to talk over the phone, then come back
> >>>>> to this thread with a conclusion. Will that be ok with you?
> >>>> I don't think that a phone call is going to help. Talking code
> >>>> issues tends to muddle them in my brain. If you can give me a few
> >>>> days I will propose a rough version of how I think your code should
> >>>> be integrated into the LSM environment. I'm spending more time
> >>>> trying (unsuccessfully :( ) to describe the issues in English than
> >>>> it will probably take in C.
> >>> While Casey is off writing his rosetta stone,
> >> I'd hardly call it that. More of an effort to round the corners on
> >> the square peg. And Cedric has some ideas on how to approach that.
> > Should we infer from this comment that, of the two competing
> > strategies, Cedric's is the favored architecture?
> 
> With Cedric's latest patches I'd say there's only one
> strategy. There's still some refinement to do, but we're
> getting there.

Dynamic tracking has an unsolvable race condition.  If process A maps a
page W and process B maps the same page X, then the result of W^X checks
depends on the order of mprotect() calls between A and B.

If we're OK saying "don't do that" then I can get behind dynamic tracking
as a whole.  Even if we settle on dynamic tracking, where that tracking
code lives is still an open question, IMO.
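
To make the ordering dependence concrete, here is a minimal userspace
sketch of the two-process scenario above; the enclave fd, the page size
and the offset are hypothetical placeholders, not taken from any posted
patch:

#include <sys/mman.h>

/* Both processes share one enclave fd and target the same page
 * (offset 0 stands in for "the same enclave page"). */
static void *map_enclave_page(int enclave_fd, int prot)
{
	return mmap(NULL, 4096, prot, MAP_SHARED, enclave_fd, 0);
}

/* A W^X policy that grants this request must afterwards refuse
 * PROT_EXEC for the page... */
void process_a(int enclave_fd)
{
	map_enclave_page(enclave_fd, PROT_READ | PROT_WRITE);
}

/* ...and a policy that grants this one must afterwards refuse
 * PROT_WRITE.  Run concurrently, which process "wins" depends only on
 * whose call is serviced first. */
void process_b(int enclave_fd)
{
	map_enclave_page(enclave_fd, PROT_READ | PROT_EXEC);
}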
Xing, Cedric July 9, 2019, 9:16 p.m. UTC | #32
On 7/8/2019 6:52 PM, Sean Christopherson wrote:
> On Mon, Jul 08, 2019 at 05:02:00PM -0700, Casey Schaufler wrote:
>> On 7/7/2019 6:30 AM, Dr. Greg wrote:
>>> On Wed, Jul 03, 2019 at 08:32:10AM -0700, Casey Schaufler wrote:
>>>
>>> Good morning, I hope the weekend has been enjoyable for everyone.
>>>
>>>>>> On 7/2/2019 12:42 AM, Xing, Cedric wrote:
>>>>>>> ...
>>>>>>> Guess this discussion will never end if we don't get into
>>>>>>> code. Guess it'd be more productive to talk over the phone, then come back
>>>>>>> to this thread with a conclusion. Will that be ok with you?
>>>>>> I don't think that a phone call is going to help. Talking code
>>>>>> issues tends to muddle them in my brain. If you can give me a few
>>>>>> days I will propose a rough version of how I think your code should
>>>>>> be integrated into the LSM environment. I'm spending more time
>>>>>> trying (unsuccessfully :( ) to describe the issues in English than
>>>>>> it will probably take in C.
>>>>> While Casey is off writing his rosetta stone,
>>>> I'd hardly call it that. More of an effort to round the corners on
>>>> the square peg. And Cedric has some ideas on how to approach that.
>>> Should we infer from this comment that, of the two competing
>>> strategies, Cedric's is the favored architecture?
>>
>> With Cedric's latest patches I'd say there's only one
>> strategy. There's still some refinement to do, but we're
>> getting there.
> 
> Dynamic tracking has an unsolvable race condition.  If process A maps a
> page W and process B maps the same page X, then the result of W^X checks
> depends on the order of mprotect() calls between A and B.

I don't quite understand where the term "dynamic tracking" came from.
What's done in the patch is just to track which page was contributed by
which file. That's already done for all file mappings in Linux.

Back to the "race condition". A similar situation already exists in 
SELinux, between EXECMOD and EXECMEM. Say a file does *not* have EXECMOD 
but the calling process has EXECMEM. Then WX could be granted to a fresh 
private mapping (because of EXECMEM). However, once the mapping has been 
written to, X should have been revoked (because of lack of EXECMOD) but 
will still be retained until dropped by an explicit mprotect(). 
Afterwards, mprotect(X) will be denied. That's not the same situation as 
in this enclave case but they do share one thing in common - X should 
have been revoked from an existing mapping but it isn't, which is just a 
policy choice.

Nothing is unsolvable. Here are two options.
(1) We argue that it doesn't matter, similar to the EXECMOD/EXECMEM case
on regular file mappings described above; or
(2) EXECMOD is required for both W->X and X->W transitions, hence the W
requested by the 2nd process will be denied because X has already been
granted to the 1st process while EXECMOD is absent.

Please note that #2 is effectively the same concept as 
PROCESS2__ENCLAVE_EXECDIRTY in Sean's patch, except that EXECMOD is per 
file while ENCLAVE_EXECDIRTY is per process.
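
To sketch what option #2 could look like at mprotect() time: assume each
EMA records the union of protections granted so far. ema_prot() and
current_has_execmod() below are hypothetical helpers, not functions from
the patch:

/* Option #2: both W->X and X->W transitions on an enclave range
 * require an EXECMOD-style permission, in either direction. */
static int enclave_prot_transition(struct lsm_ema *ema, unsigned long prot)
{
	unsigned long old = ema_prot(ema);	/* protections granted so far */

	if (((old & PROT_WRITE) && (prot & PROT_EXEC)) ||
	    ((old & PROT_EXEC) && (prot & PROT_WRITE)))
		return current_has_execmod(ema) ? 0 : -EACCES;

	return 0;
}

Under this rule, once X has been granted to the 1st process, the 2nd
process's request for W on the same EMA fails regardless of ordering,
which removes the race described above.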
Dr. Greg July 11, 2019, 10:22 a.m. UTC | #33
On Mon, Jul 08, 2019 at 05:02:00PM -0700, Casey Schaufler wrote:

> > On 7/7/2019 6:30 AM, Dr. Greg wrote:
> > All well-taken points from an implementation perspective, but they
> > elide the point I was trying to make: without any semblance of a
> > discussion regarding the requirements needed to implement a security
> > architecture around the concept of a TEE, this entire process, despite
> > Cedric's well-intentioned efforts, amounts to pounding a square
> > solution into the round hole of a security problem.

> Lead with code. I love a good requirements document, but one of the
> few places where I agree with the agile folks is that working code
> speaks loudly.
>
> > Which, as I noted in my e-mail, is tantamount to security theater.
> 
> Not buying that. Not rejecting it, either. Without code
> to judge it's kind of hard to say.

We tried the code approach.

By most accounts it seemed to go poorly, given that it ended up with
Jonathan Corbet writing an LWN feature article on the state of
dysfunction and chaos surrounding Linux SGX driver development.

So we are standing around and mumbling until we can figure out what
kind of code we need to write to make the new driver relevant to the
CISOs and security architects to whom we need to defend SGX.

Have a good week.

Dr. Greg

As always,
Dr. Greg Wettstein, Ph.D, Worker
IDfusion, LLC               Implementing SGX secured and modeled
4206 N. 19th Ave.           intelligent network endpoints.
Fargo, ND  58102
PH: 701-281-1686            EMAIL: greg@idfusion.net
------------------------------------------------------------------------------
"Five year projections, are you kidding me.  We don't know what we are
 supposed to be doing at the 4 o'clock meeting this afternoon."
                                -- Terry Wieland
                                   Resurrection
Andy Lutomirski July 15, 2019, 10:23 p.m. UTC | #34
On Thu, Jul 11, 2019 at 3:23 AM Dr. Greg <greg@idfusion.net> wrote:
>
> On Mon, Jul 08, 2019 at 05:02:00PM -0700, Casey Schaufler wrote:
>
> > > On 7/7/2019 6:30 AM, Dr. Greg wrote:
> > > All well-taken points from an implementation perspective, but they
> > > elide the point I was trying to make: without any semblance of a
> > > discussion regarding the requirements needed to implement a security
> > > architecture around the concept of a TEE, this entire process, despite
> > > Cedric's well-intentioned efforts, amounts to pounding a square
> > > solution into the round hole of a security problem.
>
> > Lead with code. I love a good requirements document, but one of the
> > few places where I agree with the agile folks is that working code
> > speaks loudly.
> >
> > > Which, as I noted in my e-mail, is tantamount to security theater.
> >
> > Not buying that. Not rejecting it, either. Without code
> > to judge it's kind of hard to say.
>
> We tried the code approach.
>

You sent code.  That code did not, in any respect, address the issue
of how LSMs were supposed to control what code got executed.

Do you have an actual suggestion here that we should pay attention to?
diff mbox series

Patch

diff --git a/include/linux/lsm_ema.h b/include/linux/lsm_ema.h
new file mode 100644
index 000000000000..a09b8f96da05
--- /dev/null
+++ b/include/linux/lsm_ema.h
@@ -0,0 +1,171 @@ 
+/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) */
+/**
+ * Enclave Memory Area interface for LSM modules
+ *
+ * Copyright(c) 2016-19 Intel Corporation.
+ */
+
+#ifndef _LSM_EMA_H_
+#define _LSM_EMA_H_
+
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+
+/**
+ * lsm_ema - LSM Enclave Memory Area structure
+ *
+ * Data structure to track origins of enclave pages
+ *
+ * @link:
+ *	Link to adjacent EMAs. EMAs are sorted by their addresses in ascending
+ *	order
+ * @start:
+ *	Starting address
+ * @end:
+ *	Ending address
+ * @source:
+ *	File from which this range was loaded, or NULL if not loaded from
+ *	any file
+ */
+struct lsm_ema {
+	struct list_head	link;
+	size_t			start;
+	size_t			end;
+	struct file		*source;
+};
+
+#define lsm_ema_data(ema, blob_sizes)	\
+	((char *)((struct lsm_ema *)(ema) + 1) + blob_sizes.lbs_ema_data)
+
+/**
+ * lsm_ema_map - LSM Enclave Memory Map structure
+ *
+ * Container for EMAs of an enclave
+ *
+ * @list:
+ *	Head of a list of sorted EMAs
+ * @lock:
+ *	Acquire before querying/updating the list of EMAs
+ */
+struct lsm_ema_map {
+	struct list_head	list;
+	struct mutex		lock;
+};
+
+/**
+ * These are functions to be used by the LSM framework, and must be defined
+ * regardless of whether CONFIG_INTEL_SGX is enabled.
+ */
+
+#ifdef CONFIG_INTEL_SGX
+void lsm_ema_global_init(size_t);
+void lsm_free_ema_map(atomic_long_t *);
+#else
+static inline void lsm_ema_global_init(size_t ema_data_size)
+{
+}
+
+static inline void lsm_free_ema_map(atomic_long_t *p)
+{
+}
+#endif
+
+/**
+ * Below are APIs to be used by LSM modules
+ */
+
+struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *);
+struct lsm_ema *lsm_alloc_ema(void);
+void lsm_free_ema(struct lsm_ema *);
+void lsm_init_ema(struct lsm_ema *, size_t, size_t, struct file *);
+int lsm_merge_ema(struct lsm_ema *, struct lsm_ema_map *);
+struct lsm_ema *lsm_split_ema(struct lsm_ema *, size_t, struct lsm_ema_map *);
+
+static inline struct lsm_ema_map *lsm_get_ema_map(struct file *f)
+{
+	return (void *)atomic_long_read(f->f_security);
+}
+
+static inline int __must_check lsm_lock_ema(struct lsm_ema_map *map)
+{
+	return mutex_lock_interruptible(&map->lock);
+}
+
+static inline void lsm_unlock_ema(struct lsm_ema_map *map)
+{
+	mutex_unlock(&map->lock);
+}
+
+static inline struct lsm_ema *lsm_prev_ema(struct lsm_ema *p,
+					   struct lsm_ema_map *map)
+{
+	p = list_prev_entry(p, link);
+	return &p->link == &map->list ? NULL : p;
+}
+
+static inline struct lsm_ema *lsm_next_ema(struct lsm_ema *p,
+					   struct lsm_ema_map *map)
+{
+	p = list_next_entry(p, link);
+	return &p->link == &map->list ? NULL : p;
+}
+
+static inline struct lsm_ema *lsm_find_ema(struct lsm_ema_map *map, size_t a)
+{
+	struct lsm_ema *p;
+
+	BUG_ON(!mutex_is_locked(&map->lock));
+
+	list_for_each_entry(p, &map->list, link)
+		if (a < p->end)
+			break;
+	return &p->link == &map->list ? NULL : p;
+}
+
+static inline int lsm_insert_ema(struct lsm_ema_map *map, struct lsm_ema *n)
+{
+	struct lsm_ema *p = lsm_find_ema(map, n->start);
+
+	if (!p)
+		list_add_tail(&n->link, &map->list);
+	else if (n->end <= p->start)
+		list_add_tail(&n->link, &p->link);
+	else
+		return -EEXIST;
+
+	lsm_merge_ema(n, map);
+	if (p)
+		lsm_merge_ema(p, map);
+	return 0;
+}
+
+static inline int lsm_for_each_ema(struct lsm_ema_map *map, size_t start,
+				   size_t end, int (*cb)(struct lsm_ema *,
+							 void *), void *arg)
+{
+	struct lsm_ema *ema;
+	int rc;
+
+	ema = lsm_find_ema(map, start);
+	while (ema && end > ema->start) {
+		if (start > ema->start)
+			lsm_split_ema(ema, start, map);
+		if (end < ema->end)
+			ema = lsm_split_ema(ema, end, map);
+
+		rc = (*cb)(ema, arg);
+		lsm_merge_ema(ema, map);
+		if (rc)
+			return rc;
+
+		ema = lsm_next_ema(ema, map);
+	}
+
+	if (ema)
+		lsm_merge_ema(ema, map);
+	return 0;
+}
+
+#endif /* _LSM_EMA_H_ */
diff --git a/include/linux/lsm_hooks.h b/include/linux/lsm_hooks.h
index 47f58cfb6a19..ade1f9f81e64 100644
--- a/include/linux/lsm_hooks.h
+++ b/include/linux/lsm_hooks.h
@@ -29,6 +29,8 @@ 
 #include <linux/init.h>
 #include <linux/rculist.h>
 
+struct lsm_ema;
+
 /**
  * union security_list_options - Linux Security Module hook function list
  *
@@ -1446,6 +1448,21 @@ 
  * @bpf_prog_free_security:
  *	Clean up the security information stored inside bpf prog.
  *
+ * @enclave_load:
+ *	Decide if a range of pages shall be allowed to be loaded into an
+ *	enclave
+ *
+ *	@encl points to the file identifying the target enclave
+ *	@ema specifies the target range to be loaded
+ *	@flags contains protections being requested for the target range
+ *	@source points to the VMA containing the source pages to be loaded
+ *
+ * @enclave_init:
+ *	Decide if an enclave shall be allowed to launch
+ *
+ *	@encl points to the file identifying the target enclave being launched
+ *	@sigstruct contains a copy of the SIGSTRUCT in kernel memory
+ *	@source points to the VMA backing SIGSTRUCT in user memory
  */
 union security_list_options {
 	int (*binder_set_context_mgr)(struct task_struct *mgr);
@@ -1807,6 +1824,13 @@  union security_list_options {
 	int (*bpf_prog_alloc_security)(struct bpf_prog_aux *aux);
 	void (*bpf_prog_free_security)(struct bpf_prog_aux *aux);
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+	int (*enclave_load)(struct file *encl, struct lsm_ema *ema,
+			    size_t flags, struct vm_area_struct *source);
+	int (*enclave_init)(struct file *encl, struct sgx_sigstruct *sigstruct,
+			    struct vm_area_struct *source);
+#endif
 };
 
 struct security_hook_heads {
@@ -2046,6 +2070,10 @@  struct security_hook_heads {
 	struct hlist_head bpf_prog_alloc_security;
 	struct hlist_head bpf_prog_free_security;
 #endif /* CONFIG_BPF_SYSCALL */
+#ifdef CONFIG_INTEL_SGX
+	struct hlist_head enclave_load;
+	struct hlist_head enclave_init;
+#endif
 } __randomize_layout;
 
 /*
@@ -2069,6 +2097,7 @@  struct lsm_blob_sizes {
 	int	lbs_ipc;
 	int	lbs_msg_msg;
 	int	lbs_task;
+	int	lbs_ema_data;
 };
 
 /*
diff --git a/include/linux/security.h b/include/linux/security.h
index 659071c2e57c..52c200810004 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -1829,5 +1829,28 @@  static inline void security_bpf_prog_free(struct bpf_prog_aux *aux)
 #endif /* CONFIG_SECURITY */
 #endif /* CONFIG_BPF_SYSCALL */
 
+#ifdef CONFIG_INTEL_SGX
+struct sgx_sigstruct;
+#ifdef CONFIG_SECURITY
+int security_enclave_load(struct file *encl, size_t start, size_t end,
+			  size_t flags, struct vm_area_struct *source);
+int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
+			  struct vm_area_struct *source);
+#else
+static inline int security_enclave_load(struct file *encl, size_t start,
+			size_t end, size_t flags, struct vm_area_struct *src)
+{
+	return 0;
+}
+
+static inline int security_enclave_init(struct file *encl,
+					struct sgx_sigstruct *sigstruct,
+					struct vm_area_struct *src)
+{
+	return 0;
+}
+#endif /* CONFIG_SECURITY */
+#endif /* CONFIG_INTEL_SGX */
+
 #endif /* ! __LINUX_SECURITY_H */
 
diff --git a/security/Makefile b/security/Makefile
index c598b904938f..1bab8f1344b6 100644
--- a/security/Makefile
+++ b/security/Makefile
@@ -28,6 +28,7 @@  obj-$(CONFIG_SECURITY_YAMA)		+= yama/
 obj-$(CONFIG_SECURITY_LOADPIN)		+= loadpin/
 obj-$(CONFIG_SECURITY_SAFESETID)       += safesetid/
 obj-$(CONFIG_CGROUP_DEVICE)		+= device_cgroup.o
+obj-$(CONFIG_INTEL_SGX)			+= lsm_ema.o
 
 # Object integrity file lists
 subdir-$(CONFIG_INTEGRITY)		+= integrity
diff --git a/security/lsm_ema.c b/security/lsm_ema.c
new file mode 100644
index 000000000000..68fae0724d37
--- /dev/null
+++ b/security/lsm_ema.c
@@ -0,0 +1,132 @@ 
+// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+// Copyright(c) 2016-18 Intel Corporation.
+
+#include <linux/lsm_ema.h>
+#include <linux/slab.h>
+
+static struct kmem_cache *lsm_ema_cache;
+static size_t lsm_ema_data_size;
+static int lsm_ema_cache_decisions = 1;
+
+void lsm_ema_global_init(size_t ema_size)
+{
+	BUG_ON(lsm_ema_data_size > 0);
+
+	lsm_ema_data_size = ema_size;
+
+	ema_size += sizeof(struct lsm_ema);
+	ema_size = max(ema_size, sizeof(struct lsm_ema_map));
+	lsm_ema_cache = kmem_cache_create("lsm_ema_cache", ema_size,
+					  __alignof__(struct lsm_ema),
+					  SLAB_PANIC, NULL);
+
+}
+
+struct lsm_ema_map *lsm_init_or_get_ema_map(atomic_long_t *p)
+{
+	struct lsm_ema_map *map;
+
+	map = (typeof(map))atomic_long_read(p);
+	if (!map) {
+		long n;
+
+		map = (typeof(map))lsm_alloc_ema();
+		if (!map)
+			return NULL;
+
+		INIT_LIST_HEAD(&map->list);
+		mutex_init(&map->lock);
+
+		n = atomic_long_cmpxchg(p, 0, (long)map);
+		if (n) {
+			atomic_long_t a;
+			atomic_long_set(&a, (long)map);
+			map = (typeof(map))n;
+			lsm_free_ema_map(&a);
+		}
+	}
+	return map;
+}
+
+void lsm_free_ema_map(atomic_long_t *p)
+{
+	struct lsm_ema_map *map;
+	struct lsm_ema *ema, *n;
+
+	map = (typeof(map))atomic_long_read(p);
+	if (!map)
+		return;
+
+	BUG_ON(mutex_is_locked(&map->lock));
+
+	list_for_each_entry_safe(ema, n, &map->list, link)
+		lsm_free_ema(ema);
+	kmem_cache_free(lsm_ema_cache, map);
+}
+
+struct lsm_ema *lsm_alloc_ema(void)
+{
+	return kmem_cache_zalloc(lsm_ema_cache, GFP_KERNEL);
+}
+
+void lsm_free_ema(struct lsm_ema *ema)
+{
+	list_del(&ema->link);
+	if (ema->source)
+		fput(ema->source);
+	kmem_cache_free(lsm_ema_cache, ema);
+}
+
+void lsm_init_ema(struct lsm_ema *ema, size_t start, size_t end,
+		  struct file *source)
+{
+	INIT_LIST_HEAD(&ema->link);
+	ema->start = start;
+	ema->end = end;
+	if (!lsm_ema_cache_decisions && source)
+		ema->source = get_file(source);
+}
+
+int lsm_merge_ema(struct lsm_ema *p, struct lsm_ema_map *map)
+{
+	struct lsm_ema *prev = list_prev_entry(p, link);
+
+	BUG_ON(!mutex_is_locked(&map->lock));
+
+	if (&prev->link == &map->list || prev->end != p->start ||
+	    prev->source != p->source ||
+	    memcmp(prev + 1, p + 1, lsm_ema_data_size))
+		return 0;
+
+	p->start = prev->start;
+	/* lsm_free_ema() drops prev's reference on ->source, if any */
+	lsm_free_ema(prev);
+	return 1;
+}
+
+struct lsm_ema *lsm_split_ema(struct lsm_ema *p, size_t at,
+			      struct lsm_ema_map *map)
+{
+	struct lsm_ema *n;
+
+	BUG_ON(!mutex_is_locked(&map->lock));
+
+	if (at <= p->start || at >= p->end)
+		return p;
+
+	n = lsm_alloc_ema();
+	if (likely(n)) {
+		lsm_init_ema(n, p->start, at, p->source);
+		memcpy(n + 1, p + 1, lsm_ema_data_size);
+		p->start = at;
+		list_add_tail(&n->link, &p->link);
+	}
+	return n;
+}
+
+static int __init set_ema_cache_decisions(char *str)
+{
+	lsm_ema_cache_decisions = (strcmp(str, "0") && strcmp(str, "off"));
+	return 1;
+}
+__setup("lsm.ema.cache_decisions=", set_ema_cache_decisions);
diff --git a/security/security.c b/security/security.c
index f493db0bf62a..d50883f18be2 100644
--- a/security/security.c
+++ b/security/security.c
@@ -17,6 +17,7 @@ 
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/lsm_hooks.h>
+#include <linux/lsm_ema.h>
 #include <linux/integrity.h>
 #include <linux/ima.h>
 #include <linux/evm.h>
@@ -41,7 +42,9 @@  static struct kmem_cache *lsm_file_cache;
 static struct kmem_cache *lsm_inode_cache;
 
 char *lsm_names;
-static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init;
+static struct lsm_blob_sizes blob_sizes __lsm_ro_after_init = {
+	.lbs_file = sizeof(atomic_long_t) * IS_ENABLED(CONFIG_INTEL_SGX),
+};
 
 /* Boot-time LSM user choice */
 static __initdata const char *chosen_lsm_order;
@@ -169,6 +172,7 @@  static void __init lsm_set_blob_sizes(struct lsm_blob_sizes *needed)
 	lsm_set_blob_size(&needed->lbs_ipc, &blob_sizes.lbs_ipc);
 	lsm_set_blob_size(&needed->lbs_msg_msg, &blob_sizes.lbs_msg_msg);
 	lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
+	lsm_set_blob_size(&needed->lbs_ema_data, &blob_sizes.lbs_ema_data);
 }
 
 /* Prepare LSM for initialization. */
@@ -314,6 +318,7 @@  static void __init ordered_lsm_init(void)
 		lsm_inode_cache = kmem_cache_create("lsm_inode_cache",
 						    blob_sizes.lbs_inode, 0,
 						    SLAB_PANIC, NULL);
+	lsm_ema_global_init(blob_sizes.lbs_ema_data);
 
 	lsm_early_cred((struct cred *) current->cred);
 	lsm_early_task(current);
@@ -1357,6 +1362,7 @@  void security_file_free(struct file *file)
 	blob = file->f_security;
 	if (blob) {
 		file->f_security = NULL;
+		lsm_free_ema_map(blob);
 		kmem_cache_free(lsm_file_cache, blob);
 	}
 }
@@ -1420,6 +1426,7 @@  int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
 {
 	return call_int_hook(file_mprotect, 0, vma, reqprot, prot);
 }
+EXPORT_SYMBOL(security_file_mprotect);
 
 int security_file_lock(struct file *file, unsigned int cmd)
 {
@@ -2355,3 +2362,41 @@  void security_bpf_prog_free(struct bpf_prog_aux *aux)
 	call_void_hook(bpf_prog_free_security, aux);
 }
 #endif /* CONFIG_BPF_SYSCALL */
+
+#ifdef CONFIG_INTEL_SGX
+int security_enclave_load(struct file *encl, size_t start, size_t end,
+			  size_t flags, struct vm_area_struct *src)
+{
+	struct lsm_ema_map *map;
+	struct lsm_ema *ema;
+	int rc;
+
+	map = lsm_init_or_get_ema_map(encl->f_security);
+	if (unlikely(!map))
+		return -ENOMEM;
+
+	ema = lsm_alloc_ema();
+	if (unlikely(!ema))
+		return -ENOMEM;
+
+	lsm_init_ema(ema, start, end, src->vm_file);
+	rc = call_int_hook(enclave_load, 0, encl, ema, flags, src);
+	if (!rc)
+		rc = lsm_lock_ema(map);
+	if (!rc) {
+		rc = lsm_insert_ema(map, ema);
+		lsm_unlock_ema(map);
+	}
+	if (rc)
+		lsm_free_ema(ema);
+	return rc;
+}
+EXPORT_SYMBOL(security_enclave_load);
+
+int security_enclave_init(struct file *encl, struct sgx_sigstruct *sigstruct,
+			  struct vm_area_struct *src)
+{
+	return call_int_hook(enclave_init, 0, encl, sigstruct, src);
+}
+EXPORT_SYMBOL(security_enclave_init);
+#endif /* CONFIG_INTEL_SGX */
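
For reference, below is a skeleton of how an LSM module might consume the
EMA API introduced by this patch. All example_* names, the per-EMA blob
layout, and the wiring into file_mprotect are illustrative assumptions,
not part of the patch:

#include <linux/lsm_ema.h>
#include <linux/lsm_hooks.h>
#include <linux/mman.h>

struct example_ema_blob {
	size_t prot;		/* union of protections granted at load */
};

static struct lsm_blob_sizes example_blob_sizes __lsm_ro_after_init = {
	.lbs_ema_data = sizeof(struct example_ema_blob),
};

static int example_enclave_load(struct file *encl, struct lsm_ema *ema,
				size_t flags, struct vm_area_struct *src)
{
	struct example_ema_blob *blob =
		(void *)lsm_ema_data(ema, example_blob_sizes);

	/* Example policy: no executable enclave pages built from
	 * anonymous memory. */
	if ((flags & PROT_EXEC) && !src->vm_file)
		return -EACCES;

	blob->prot = flags;	/* remembered for mprotect() checks below */
	return 0;
}

static int example_check_wx(struct lsm_ema *ema, void *arg)
{
	struct example_ema_blob *blob =
		(void *)lsm_ema_data(ema, example_blob_sizes);
	size_t reqprot = *(size_t *)arg;

	/* Example W^X rule: a range loaded writable never becomes
	 * executable. */
	if ((reqprot & PROT_EXEC) && (blob->prot & PROT_WRITE))
		return -EACCES;
	return 0;
}

/* Called from the module's file_mprotect hook when @encl is an enclave. */
static int example_enclave_mprotect(struct file *encl, size_t start,
				    size_t end, size_t reqprot)
{
	struct lsm_ema_map *map = lsm_get_ema_map(encl);
	int rc;

	if (!map)		/* no enclave pages loaded yet */
		return 0;

	rc = lsm_lock_ema(map);
	if (rc)
		return rc;
	rc = lsm_for_each_ema(map, start, end, example_check_wx, &reqprot);
	lsm_unlock_ema(map);
	return rc;
}

Note that with the default lsm.ema.cache_decisions=on, ->source is not
retained in the EMA, so a module following this pattern must treat its
enclave_load decision as final rather than re-examining the source file
at mprotect() time.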