
[for_v22,v2,8/8] x86/sgx: Pass userspace source address directly to EADD

Message ID 20190813011252.4121-9-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series x86/sgx: Remove EADD worker and page copy

Commit Message

Sean Christopherson Aug. 13, 2019, 1:12 a.m. UTC
Invoke EADD with the userspace source address instead of first copying
the data to a kernel page to avoid the overhead of alloc_page() and
copy_from_user().

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/driver/ioctl.c | 148 ++++++-------------------
 1 file changed, 33 insertions(+), 115 deletions(-)
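In short, the patch below drops the intermediate kernel page and points EADD's PAGEINFO source field at the user mapping instead. A condensed, non-runnable sketch of the two flows (kernel pseudocode; names are taken from the patch, error handling omitted):

```c
/* Before: stage the source page through a kernel buffer. */
data_page = alloc_page(GFP_HIGHUSER);
data = kmap(data_page);
copy_from_user(data, (void __user *)addp.src, PAGE_SIZE);
pginfo.contents = (unsigned long)data;
ret = __eadd(&pginfo, sgx_epc_addr(epc_page));  /* EADD reads the kernel copy */
kunmap(data_page);
__free_page(data_page);

/* After: hand the userspace source address straight to EADD. */
pginfo.contents = addp.src;
down_read(&current->mm->mmap_sem);  /* keep the source VMA stable */
__uaccess_begin();                  /* EADD reads user memory directly */
ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
__uaccess_end();
up_read(&current->mm->mmap_sem);
```

The `__uaccess_begin()`/`__uaccess_end()` pair is needed because ENCLS[EADD] now dereferences a user pointer from kernel context; the VM_MAYEXEC check that previously lived in `sgx_encl_page_import_user()` moves under the same `mmap_sem` critical section (see `__sgx_encl_add_page()` below).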

Comments

Jarkko Sakkinen Aug. 22, 2019, 2:37 p.m. UTC | #1
On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> Invoke EADD with the userspace source address instead of first copying
> the data to a kernel page to avoid the overhead of alloc_page() and
> copy_from_user().
> 
> Suggested-by: Andy Lutomirski <luto@kernel.org>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>

NAK, because it takes away TCS validation and the commit message
does not give any reasoning for doing that.

Other patches have been squashed.

/Jarkko
Sean Christopherson Aug. 22, 2019, 2:50 p.m. UTC | #2
On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > Invoke EADD with the userspace source address instead of first copying
> > the data to a kernel page to avoid the overhead of alloc_page() and
> > copy_from_user().
> > 
> > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> 
> NAK, because it takes away TCS validation and the commit message
> does not give any reasoning for doing that.

Doh, I have a thorough explanation, but apparently it never made it from
my head to the changelog.  I'll send v2 as a standalone patch.
Jarkko Sakkinen Aug. 22, 2019, 5 p.m. UTC | #3
On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > Invoke EADD with the userspace source address instead of first copying
> > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > copy_from_user().
> > > 
> > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > 
> > NAK, because it takes away TCS validation and the commit message
> > does not give any reasoning for doing that.
> 
> Doh, I have a thorough explanation, but apparently it never made it from
> my head to the changelog.  I'll send v2 as a standalone patch.

Yeah, w/o explanation I won't just take away functionality :-)

/Jarkko
Jarkko Sakkinen Aug. 23, 2019, 1:25 a.m. UTC | #4
On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > Invoke EADD with the userspace source address instead of first copying
> > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > copy_from_user().
> > > > 
> > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > 
> > > NAK, because it takes away TCS validation and the commit message
> > > does not give any reasoning for doing that.
> > 
> > Doh, I have a thorough explanation, but apparently it never made it from
> > my head to the changelog.  I'll send v2 as a standalone patch.
> 
> Yeah, w/o explanation I won't just take away functionality :-)

I came to realize that, also from a security perspective, it might be
helpful to EADD not from a copy of the source, but from the
actual source.

So yes, I'm for not supporting the copy approach at all. I think this
viewpoint is important to note in addition to the performance
perspective.

/Jarkko
Sean Christopherson Aug. 23, 2019, 1:28 a.m. UTC | #5
On Fri, Aug 23, 2019 at 04:25:16AM +0300, Jarkko Sakkinen wrote:
> On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> > On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > > Invoke EADD with the userspace source address instead of first copying
> > > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > > copy_from_user().
> > > > > 
> > > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > > 
> > > > NAK, because it takes away TCS validation and the commit message
> > > > does not give any reasoning for doing that.
> > > 
> > > Doh, I have a thorough explanation, but apparently it never made it from
> > > my head to the changelog.  I'll send v2 as a standalone patch.
> > 
> > Yeah, w/o explanation I won't just take away functionality :-)
> 
> I came to realize that, also from a security perspective, it might be
> helpful to EADD not from a copy of the source, but from the
> actual source.
> 
> So yes, I'm for not supporting the copy approach at all. I think this
> viewpoint is important to note in addition to the performance
> perspective.

Side topic, I'm getting ELDU MAC failures on master.  Are there any known
regressions, or should I start debugging?

[   18.005880] ------------[ cut here ]------------
[   18.005884] sgx: ELDU returned 9 (0x9)
[   18.005906] WARNING: CPU: 4 PID: 1142 at /home/sean/go/src/kernel.org/linux/arch/x86/kernel/cpu/sgx/encl.c:50 sgx_encl_eldu+0x363/0x390
[   18.005907] Modules linked in:
[   18.005913] CPU: 4 PID: 1142 Comm: lsdt Tainted: G        W         5.3.0-rc3+ #725
[   18.005914] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[   18.005916] RIP: 0010:sgx_encl_eldu+0x363/0x390
[   18.005917] Code: 75 11 f7 c3 00 00 00 40 ba f2 ff ff ff 0f 85 d4 fe ff ff 89 d9 89 da 48 c7 c6 4e af db 81 48 c7 c7 53 af db 81 e8 8d 1f 03 00 <0f> 0b ba f2 ff ff ff e9 b1 fe ff ff 31 c0 48 c7 c7 40 10 53 82 e9
[   18.005917] RSP: 0000:ffffc90000703d00 EFLAGS: 00010286
[   18.005918] RAX: 0000000000000000 RBX: 0000000000000009 RCX: 0000000000000006
[   18.005918] RDX: 0000000000000007 RSI: 0000000000000082 RDI: ffff88846f816560
[   18.005919] RBP: ffffc90000703d90 R08: 00000000000002da R09: 0000000000000004
[   18.005919] R10: 0000000000000000 R11: 0000000000000001 R12: ffff88846d08a100
[   18.005920] R13: ffffea00118daa40 R14: ffff888467d3e740 R15: ffffea001189c680
[   18.005920] FS:  00007fa90afff700(0000) GS:ffff88846f800000(0000) knlGS:0000000000000000
[   18.005922] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   18.005923] CR2: 00007fa8e3001000 CR3: 0000000466389002 CR4: 0000000000360ee0
[   18.005923] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   18.005923] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   18.005924] Call Trace:
[   18.005934]  ? sgx_encl_load_page.part.15+0x56/0x90
[   18.005935]  sgx_encl_load_page.part.15+0x56/0x90
[   18.005936]  sgx_vma_fault+0x84/0xf0
[   18.005941]  __do_fault+0x4f/0x87
[   18.005943]  __handle_mm_fault+0xa65/0x1020
[   18.005945]  handle_mm_fault+0xb0/0x1f0
[   18.005946]  __do_page_fault+0x231/0x4d0
[   18.005951]  async_page_fault+0x2f/0x40
[   18.005955] RIP: 0033:0x7ffe7bd62a39
[   18.005955] Code: 74 05 c1 e8 0c 89 06 31 c0 5d c3 90 90 90 90 90 90 55 48 89 e5 83 f8 02 72 67 83 f8 03 77 62 48 8b 5d 10 48 8d 0d 00 00 00 00 <0f> 01 d7 31 db 48 8b 4d 18 e3 10 89 01 73 0c 66 89 79 04 66 89 71
[   18.005956] RSP: 002b:00007fa90affd8c0 EFLAGS: 00000202
[   18.005956] RAX: 0000000000000003 RBX: 00007fa8e3a10000 RCX: 00007ffe7bd62a39
[   18.005957] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[   18.005957] RBP: 00007fa90affd8c0 R08: 0000000000000000 R09: 0000000000000000
[   18.005957] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[   18.005958] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[   18.005959] ---[ end trace c120e55e5ad35ff0 ]---
Sean Christopherson Aug. 23, 2019, 1:34 a.m. UTC | #6
On Thu, Aug 22, 2019 at 06:28:39PM -0700, Sean Christopherson wrote:
> On Fri, Aug 23, 2019 at 04:25:16AM +0300, Jarkko Sakkinen wrote:
> > On Thu, Aug 22, 2019 at 08:00:15PM +0300, Jarkko Sakkinen wrote:
> > > On Thu, 2019-08-22 at 07:50 -0700, Sean Christopherson wrote:
> > > > On Thu, Aug 22, 2019 at 05:37:18PM +0300, Jarkko Sakkinen wrote:
> > > > > On Mon, 2019-08-12 at 18:12 -0700, Sean Christopherson wrote:
> > > > > > Invoke EADD with the userspace source address instead of first copying
> > > > > > the data to a kernel page to avoid the overhead of alloc_page() and
> > > > > > copy_from_user().
> > > > > > 
> > > > > > Suggested-by: Andy Lutomirski <luto@kernel.org>
> > > > > > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > > > > 
> > > > > NAK, because it takes away TCS validation and the commit message
> > > > > does not give any reasoning for doing that.
> > > > 
> > > > Doh, I have a thorough explanation, but apparently it never made it from
> > > > my head to the changelog.  I'll send v2 as a standalone patch.
> > > 
> > > Yeah, w/o explanation I won't just take away functionality :-)
> > 
> > I came to realize that also from security perspective it might be
> > helpful to EADD, not from a copy of the source, but from the
> > actual source.
> > 
> > So yes, I'm for not supporting copy approach at all. I think this
> > viewpoint is important to note in addition to the performance
> > perspective.
> 
> Side topic, I'm getting ELDU MAC failures on master.  Are there any known
> regressions, or should I start debugging?

Never mind, I think I spotted the issue.  Hooray for git diff on branches.

Patch

diff --git a/arch/x86/kernel/cpu/sgx/driver/ioctl.c b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
index 840376cf352f..a55a138826d5 100644
--- a/arch/x86/kernel/cpu/sgx/driver/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/driver/ioctl.c
@@ -302,71 +302,46 @@  static int sgx_validate_secinfo(struct sgx_secinfo *secinfo)
 	return 0;
 }
 
-static bool sgx_validate_offset(struct sgx_encl *encl, unsigned long offset)
-{
-	if (offset & (PAGE_SIZE - 1))
-		return false;
-
-	if (offset >= encl->size)
-		return false;
-
-	return true;
-}
-
-static int sgx_validate_tcs(struct sgx_encl *encl, struct sgx_tcs *tcs)
-{
-	int i;
-
-	if (tcs->flags & SGX_TCS_RESERVED_MASK)
-		return -EINVAL;
-
-	if (tcs->flags & SGX_TCS_DBGOPTIN)
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->ssa_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->fs_offset))
-		return -EINVAL;
-
-	if (!sgx_validate_offset(encl, tcs->gs_offset))
-		return -EINVAL;
-
-	if ((tcs->fs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	if ((tcs->gs_limit & 0xFFF) != 0xFFF)
-		return -EINVAL;
-
-	for (i = 0; i < SGX_TCS_RESERVED_SIZE; i++)
-		if (tcs->reserved[i])
-			return -EINVAL;
-
-	return 0;
-}
-
 static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_encl_page *encl_page,
 			       struct sgx_epc_page *epc_page,
-			       void *data,
-			       struct sgx_secinfo *secinfo,
-			       unsigned long mrmask)
+			       struct sgx_secinfo *secinfo, unsigned long src,
+			       unsigned long prot, unsigned long mrmask)
 {
 	struct sgx_pageinfo pginfo;
+	struct vm_area_struct *vma;
 	int ret;
 	int i;
 
 	pginfo.secs = (unsigned long)sgx_epc_addr(encl->secs.epc_page);
 	pginfo.addr = SGX_ENCL_PAGE_ADDR(encl_page);
 	pginfo.metadata = (unsigned long)secinfo;
-	pginfo.contents = (unsigned long)data;
+	pginfo.contents = src;
 
+	down_read(&current->mm->mmap_sem);
+
+	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
+	if (encl_page->vm_prot_bits & VM_EXEC) {
+		vma = find_vma(current->mm, src);
+		if (!vma) {
+			up_read(&current->mm->mmap_sem);
+			return -EFAULT;
+		}
+
+		if (!(vma->vm_flags & VM_MAYEXEC)) {
+			up_read(&current->mm->mmap_sem);
+			return -EACCES;
+		}
+	}
+
+	__uaccess_begin();
 	ret = __eadd(&pginfo, sgx_epc_addr(epc_page));
-	if (ret) {
-		if (encls_failed(ret))
-			ENCLS_WARN(ret, "EADD");
+	__uaccess_end();
+
+	up_read(&current->mm->mmap_sem);
+
+	if (ret)
 		return -EFAULT;
-	}
 
 	for_each_set_bit(i, &mrmask, 16) {
 		ret = __eextend(sgx_epc_addr(encl->secs.epc_page),
@@ -386,9 +361,9 @@  static int __sgx_encl_add_page(struct sgx_encl *encl,
 	return 0;
 }
 
-static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
-			     void *data, struct sgx_secinfo *secinfo,
-			     unsigned int mrmask, unsigned long prot)
+static int sgx_encl_add_page(struct sgx_encl *encl,
+			     struct sgx_enclave_add_page *addp,
+			     struct sgx_secinfo *secinfo, unsigned long prot)
 {
 	u64 page_type = secinfo->flags & SGX_SECINFO_PAGE_TYPE_MASK;
 	struct sgx_encl_page *encl_page;
@@ -396,13 +371,7 @@  static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	struct sgx_va_page *va_page;
 	int ret;
 
-	if (page_type == SGX_SECINFO_TCS) {
-		ret = sgx_validate_tcs(encl, data);
-		if (ret)
-			return ret;
-	}
-
-	encl_page = sgx_encl_page_alloc(encl, addr, prot, page_type);
+	encl_page = sgx_encl_page_alloc(encl, addp->addr, prot, page_type);
 	if (IS_ERR(encl_page))
 		return PTR_ERR(encl_page);
 
@@ -425,8 +394,8 @@  static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	if (ret)
 		goto err_out_shrink;
 
-	ret = __sgx_encl_add_page(encl, encl_page, epc_page, data, secinfo,
-				  mrmask);
+	ret = __sgx_encl_add_page(encl, encl_page, epc_page, secinfo,
+				  addp->src, prot, addp->mrmask);
 	if (ret)
 		goto err_out;
 
@@ -447,36 +416,6 @@  static int sgx_encl_add_page(struct sgx_encl *encl, unsigned long addr,
 	return ret;
 }
 
-static int sgx_encl_page_import_user(void *dst, unsigned long src,
-				     unsigned long prot)
-{
-	struct vm_area_struct *vma;
-	int ret = 0;
-
-	down_read(&current->mm->mmap_sem);
-
-	/* Query vma's VM_MAYEXEC as an indirect path_noexec() check. */
-	if (prot & PROT_EXEC) {
-		vma = find_vma(current->mm, src);
-		if (!vma) {
-			ret = -EFAULT;
-			goto out;
-		}
-
-		if (!(vma->vm_flags & VM_MAYEXEC)) {
-			ret = -EACCES;
-			goto out;
-		}
-	}
-
-	if (copy_from_user(dst, (void __user *)src, PAGE_SIZE))
-		ret = -EFAULT;
-
-out:
-	up_read(&current->mm->mmap_sem);
-	return ret;
-}
-
 /**
  * sgx_ioc_enclave_add_page - handler for %SGX_IOC_ENCLAVE_ADD_PAGE
  *
@@ -498,10 +437,7 @@  static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	struct sgx_encl *encl = filep->private_data;
 	struct sgx_enclave_add_page addp;
 	struct sgx_secinfo secinfo;
-	struct page *data_page;
 	unsigned long prot;
-	void *data;
-	int ret;
 
 	if (!(encl->flags & SGX_ENCL_CREATED))
 		return -EINVAL;
@@ -523,12 +459,6 @@  static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if (sgx_validate_secinfo(&secinfo))
 		return -EINVAL;
 
-	data_page = alloc_page(GFP_HIGHUSER);
-	if (!data_page)
-		return -ENOMEM;
-
-	data = kmap(data_page);
-
 	prot = _calc_vm_trans(secinfo.flags, SGX_SECINFO_R, PROT_READ)  |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_W, PROT_WRITE) |
 	       _calc_vm_trans(secinfo.flags, SGX_SECINFO_X, PROT_EXEC);
@@ -537,19 +467,7 @@  static long sgx_ioc_enclave_add_page(struct file *filep, void __user *arg)
 	if ((secinfo.flags & SGX_SECINFO_PAGE_TYPE_MASK) == SGX_SECINFO_TCS)
 		prot |= PROT_READ | PROT_WRITE;
 
-	ret = sgx_encl_page_import_user(data, addp.src, prot);
-	if (ret)
-		goto out;
-
-	ret = sgx_encl_add_page(encl, addp.addr, data, &secinfo, addp.mrmask,
-				prot);
-	if (ret)
-		goto out;
-
-out:
-	kunmap(data_page);
-	__free_page(data_page);
-	return ret;
+	return sgx_encl_add_page(encl, &addp, &secinfo, prot);
 }
 
 static int __sgx_get_key_hash(struct crypto_shash *tfm, const void *modulus,