
[v2,2/5] x86/sgx: Reject concurrent ioctls on single enclave

Message ID 20190827192717.27312-3-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series x86/sgx: Fix lock ordering bug w/ EADD

Commit Message

Sean Christopherson Aug. 27, 2019, 7:27 p.m. UTC
Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
serially to successfully initialize an enclave, e.g. the kernel already
strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
EINVAL if userspace attempts an ioctl while a different ioctl for the
same enclave is in progress.

The primary beneficiary of explicit serialization is sgx_encl_grow() as
it no longer has to deal with dropping and reacquiring encl->lock when
a new VA page needs to be allocated.  Eliminating the lock juggling in
sgx_encl_grow() paves the way for fixing a lock ordering bug in
ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.

Serializing ioctls also fixes a race between ENCLAVE_CREATE and
ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
concurrent updates to allowed_attributes could result in a stale
value.  The race could also be fixed by taking encl->lock or making
allowed_attributes atomic, but both add unnecessary overhead with this
change applied.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.h  |  1 +
 arch/x86/kernel/cpu/sgx/ioctl.c | 91 +++++++++++++++------------------
 2 files changed, 42 insertions(+), 50 deletions(-)

Comments

Jarkko Sakkinen Aug. 29, 2019, 1:15 p.m. UTC | #1
On Tue, Aug 27, 2019 at 12:27:14PM -0700, Sean Christopherson wrote:
> Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
> serially to successfully initialize an enclave, e.g. the kernel already
> strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
> in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
> EINVAL if userspace attempts an ioctl while a different ioctl for the
> same enclave is in progress.
> 
> The primary beneficiary of explicit serialization is sgx_encl_grow() as
> it no longer has to deal with dropping and reacquiring encl->lock when
> a new VA page needs to be allocated.  Eliminating the lock juggling in
> sgx_encl_grow() paves the way for fixing a lock ordering bug in
> ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.
> 
> Serializing ioctls also fixes a race between ENCLAVE_CREATE and
> ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
> concurrent updates to allowed_attributes could result in a stale
> value.  The race could also be fixed by taking encl->lock or making
> allowed_attributes atomic, but both add unnecessary overhead with this
> change applied.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

#PF handler should be good as it has this conditional:

flags = atomic_read(&encl->flags);

if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
	return ERR_PTR(-EFAULT);

What about the reclaimer?

/Jarkko
Sean Christopherson Aug. 29, 2019, 4 p.m. UTC | #2
On Thu, Aug 29, 2019 at 04:15:43PM +0300, Jarkko Sakkinen wrote:
> On Tue, Aug 27, 2019 at 12:27:14PM -0700, Sean Christopherson wrote:
> > Except for ENCLAVE_SET_ATTRIBUTE, all SGX ioctls() must be executed
> > serially to successfully initialize an enclave, e.g. the kernel already
> > strictly requires ECREATE->EADD->EINIT, and concurrent EADDs will result
> > in an unstable MRENCLAVE.  Explicitly enforce serialization by returning
> > EINVAL if userspace attempts an ioctl while a different ioctl for the
> > same enclave is in progress.
> > 
> > The primary beneficiary of explicit serialization is sgx_encl_grow() as
> > it no longer has to deal with dropping and reacquiring encl->lock when
> > a new VA page needs to be allocated.  Eliminating the lock juggling in
> > sgx_encl_grow() paves the way for fixing a lock ordering bug in
> > ENCLAVE_ADD_PAGE without having to also juggle mm->mmap_sem.
> > 
> > Serializing ioctls also fixes a race between ENCLAVE_CREATE and
> > ENCLAVE_SET_ATTRIBUTE, as the latter does not take encl->lock, e.g.
> > concurrent updates to allowed_attributes could result in a stale
> > value.  The race could also be fixed by taking encl->lock or making
> > allowed_attributes atomic, but both add unnecessary overhead with this
> > change applied.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> #PF handler should be good as it has this conditional:
> 
> flags = atomic_read(&encl->flags);
> 
> if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> 	return ERR_PTR(-EFAULT);
> 
> What about the reclaimer?

Can you elaborate?  I'm not sure what you're asking.
Jarkko Sakkinen Aug. 29, 2019, 6:14 p.m. UTC | #3
On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > #PF handler should be good as it has this conditional:
> > 
> > flags = atomic_read(&encl->flags);
> > 
> > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > 	return ERR_PTR(-EFAULT);
> > 
> > What about the reclaimer?
> 
> Can you elaborate?  I'm not sure what you're asking.

I'm thinking of a race between list_add() in the ioctl and
list_move_tail() in the reclaimer.

A quick way to fix this would be to move sgx_alloc_va_page() out of
sgx_encl_grow() and return NULL if a new allocation is required.

In the ioctl you can then allocate the page before taking locks and
do "list_add(&va_page->list, &encl->va_pages);" behind the locks.

/Jarkko
Sean Christopherson Aug. 29, 2019, 6:43 p.m. UTC | #4
On Thu, Aug 29, 2019 at 09:14:38PM +0300, Jarkko Sakkinen wrote:
> On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > > #PF handler should be good as it has this conditional:
> > > 
> > > flags = atomic_read(&encl->flags);
> > > 
> > > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > > 	return ERR_PTR(-EFAULT);
> > > 
> > > What about the reclaimer?
> > 
> > Can you elaborate?  I'm not sure what you're asking.
> 
> I'm thinking of a race between list_add() in the ioctl and
> list_move_tail() in the reclaimer.

Ah crud, I forgot that the reclaimer can manipulate the list of VA pages,
I was thinking they were invisible to the reclaimer.

> A quick way to fix this would be move sgx_alloc_va_page() from
> sgx_encl_grow() and return NULL if a new allocation is required.

We don't even need to do that, moving the list_add() from sgx_encl_grow()
to its caller would be sufficient.  Same concept, but the allocation would
be handled by sgx_encl_grow() instead of having to duplicate that code in
sgx_encl_add_page() and sgx_encl_create().

> In the ioctl you can then allocate the page before taking locks and
> do "list_add(&va_page->list, &encl->va_pages);" behind the locks.
Jarkko Sakkinen Aug. 29, 2019, 10:22 p.m. UTC | #5
On Thu, Aug 29, 2019 at 11:43:33AM -0700, Sean Christopherson wrote:
> On Thu, Aug 29, 2019 at 09:14:38PM +0300, Jarkko Sakkinen wrote:
> > On Thu, Aug 29, 2019 at 09:00:01AM -0700, Sean Christopherson wrote:
> > > > #PF handler should be good as it has this conditional:
> > > > 
> > > > flags = atomic_read(&encl->flags);
> > > > 
> > > > if ((flags & SGX_ENCL_DEAD) || !(flags & SGX_ENCL_INITIALIZED))
> > > > 	return ERR_PTR(-EFAULT);
> > > > 
> > > > What about the reclaimer?
> > > 
> > > Can you elaborate?  I'm not sure what you're asking.
> > 
> > I'm thinking of a race between list_add() in the ioctl and
> > list_move_tail() in the reclaimer.
> 
> Ah crud, I forgot that the reclaimer can manipulate the list of VA pages,
> I was thinking they were invisible to the reclaimer.
> 
> > A quick way to fix this would be move sgx_alloc_va_page() from
> > sgx_encl_grow() and return NULL if a new allocation is required.
> 
> We don't even need to do that, moving the list_add() from sgx_encl_grow()
> to its caller would be sufficient.  Same concept, but the allocation would
> be handled by sgx_encl_grow() instead of having to duplicate that code in
> sgx_encl_add_page() and sgx_encl_create().

Yeah, that was the whole point: list_add() cannot be done without
encl->lock. Anything that works goes...

/Jarkko

Patch

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index ffc09e1a0fc6..c7608964d69e 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -56,6 +56,7 @@  enum sgx_encl_flags {
 	SGX_ENCL_INITIALIZED	= BIT(1),
 	SGX_ENCL_DEBUG		= BIT(2),
 	SGX_ENCL_DEAD		= BIT(3),
+	SGX_ENCL_IOCTL		= BIT(4),
 };
 
 struct sgx_encl_mm {
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index d05cf051ca45..1ce8e3021a5c 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -14,8 +14,7 @@ 
 #include <linux/suspend.h>
 #include "driver.h"
 
-static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
-					 unsigned int disallowed_flags)
+static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl)
 {
 	struct sgx_va_page *va_page = NULL;
 	void *err;
@@ -23,36 +22,20 @@  static struct sgx_va_page *sgx_encl_grow(struct sgx_encl *encl,
 	BUILD_BUG_ON(SGX_VA_SLOT_COUNT !=
 		(SGX_ENCL_PAGE_VA_OFFSET_MASK >> 3) + 1);
 
-	if (atomic_read(&encl->flags) & disallowed_flags)
-		return ERR_PTR(-EFAULT);
-
 	if (!(encl->page_cnt % SGX_VA_SLOT_COUNT)) {
-		mutex_unlock(&encl->lock);
-
 		va_page = kzalloc(sizeof(*va_page), GFP_KERNEL);
-		if (!va_page) {
-			mutex_lock(&encl->lock);
+		if (!va_page)
 			return ERR_PTR(-ENOMEM);
-		}
 
 		va_page->epc_page = sgx_alloc_va_page();
-		mutex_lock(&encl->lock);
-
 		if (IS_ERR(va_page->epc_page)) {
 			err = ERR_CAST(va_page->epc_page);
 			kfree(va_page);
 			return err;
-		} else if (atomic_read(&encl->flags) & disallowed_flags) {
-			sgx_free_page(va_page->epc_page);
-			kfree(va_page);
-			return ERR_PTR(-EFAULT);
-		} else if (encl->page_cnt % SGX_VA_SLOT_COUNT) {
-			sgx_free_page(va_page->epc_page);
-			kfree(va_page);
-			va_page = NULL;
-		} else {
-			list_add(&va_page->list, &encl->va_pages);
 		}
+
+		WARN_ON_ONCE(encl->page_cnt % SGX_VA_SLOT_COUNT);
+		list_add(&va_page->list, &encl->va_pages);
 	}
 	encl->page_cnt++;
 	return va_page;
@@ -174,13 +157,12 @@  static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	struct file *backing;
 	long ret;
 
-	mutex_lock(&encl->lock);
+	if (atomic_read(&encl->flags) & SGX_ENCL_CREATED)
+		return -EINVAL;
 
-	va_page = sgx_encl_grow(encl, SGX_ENCL_CREATED | SGX_ENCL_DEAD);
-	if (IS_ERR(va_page)) {
-		ret = PTR_ERR(va_page);
-		goto err_out_unlock;
-	}
+	va_page = sgx_encl_grow(encl);
+	if (IS_ERR(va_page))
+		return PTR_ERR(va_page);
 
 	ssaframesize = sgx_calc_ssaframesize(secs->miscselect, secs->xfrm);
 	if (sgx_validate_secs(secs, ssaframesize)) {
@@ -230,12 +212,11 @@  static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 
 	/*
 	 * Set SGX_ENCL_CREATED only after the enclave is fully prepped.  This
-	 * allows other flows to check if the enclave has been created without
-	 * taking encl->lock.
+	 * allows setting and checking enclave creation without having to take
+	 * encl->lock.
 	 */
 	atomic_or(SGX_ENCL_CREATED, &encl->flags);
 
-	mutex_unlock(&encl->lock);
 	return 0;
 
 err_out:
@@ -249,8 +230,6 @@  static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 err_out_shrink:
 	sgx_encl_shrink(encl, va_page);
 
-err_out_unlock:
-	mutex_unlock(&encl->lock);
 	return ret;
 }
 
@@ -270,9 +249,8 @@  static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
  *   0 on success,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_create(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_create(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_create ecreate;
 	struct page *secs_page;
 	struct sgx_secs *secs;
@@ -404,14 +382,14 @@  static int sgx_encl_add_page(struct sgx_encl *encl,
 		return PTR_ERR(epc_page);
 	}
 
-	mutex_lock(&encl->lock);
-
-	va_page = sgx_encl_grow(encl, SGX_ENCL_INITIALIZED | SGX_ENCL_DEAD);
+	va_page = sgx_encl_grow(encl);
 	if (IS_ERR(va_page)) {
 		ret = PTR_ERR(va_page);
 		goto err_out_free;
 	}
 
+	mutex_lock(&encl->lock);
+
 	ret = radix_tree_insert(&encl->page_tree, PFN_DOWN(encl_page->desc),
 				encl_page);
 	if (ret)
@@ -430,13 +408,13 @@  static int sgx_encl_add_page(struct sgx_encl *encl,
 			  PFN_DOWN(encl_page->desc));
 
 err_out_shrink:
+	mutex_unlock(&encl->lock);
 	sgx_encl_shrink(encl, va_page);
 
 err_out_free:
 	sgx_free_page(epc_page);
 	kfree(encl_page);
 
-	mutex_unlock(&encl->lock);
 	return ret;
 }
 
@@ -472,9 +450,8 @@  static int sgx_encl_add_page(struct sgx_encl *encl,
  *   -ENOMEM if any memory allocation, including EPC, fails,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_add_page(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
 
@@ -602,9 +579,8 @@  static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
  *   SGX error code on EINIT failure,
  *   -errno otherwise
  */
-static long sgx_ioc_enclave_init(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_init(struct sgx_encl *encl, void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_einittoken *einittoken;
 	struct sgx_sigstruct *sigstruct;
 	struct sgx_enclave_init einit;
@@ -658,9 +634,9 @@  static long sgx_ioc_enclave_init(struct file *filep, void __user *arg)
  *
  * Return: 0 on success, -errno otherwise
  */
-static long sgx_ioc_enclave_set_attribute(struct file *filep, void __user *arg)
+static long sgx_ioc_enclave_set_attribute(struct sgx_encl *encl,
+					  void __user *arg)
 {
-	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_set_attribute params;
 	struct file *attribute_file;
 	int ret;
@@ -687,16 +663,31 @@  static long sgx_ioc_enclave_set_attribute(struct file *filep, void __user *arg)
 
 long sgx_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 {
+	struct sgx_encl *encl = filep->private_data;
+	int ret;
+
+	if (atomic_fetch_or(SGX_ENCL_IOCTL, &encl->flags) & SGX_ENCL_IOCTL)
+		return -EBUSY;
+
 	switch (cmd) {
 	case SGX_IOC_ENCLAVE_CREATE:
-		return sgx_ioc_enclave_create(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_create(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_ADD_PAGE:
-		return sgx_ioc_enclave_add_page(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_add_page(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_INIT:
-		return sgx_ioc_enclave_init(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_init(encl, (void __user *)arg);
+		break;
 	case SGX_IOC_ENCLAVE_SET_ATTRIBUTE:
-		return sgx_ioc_enclave_set_attribute(filep, (void __user *)arg);
+		ret = sgx_ioc_enclave_set_attribute(encl, (void __user *)arg);
+		break;
 	default:
-		return -ENOIOCTLCMD;
+		ret = -ENOIOCTLCMD;
+		break;
 	}
+
+	atomic_andnot(SGX_ENCL_IOCTL, &encl->flags);
+
+	return ret;
 }