[v2] x86/sgx: Fix sgx_encl_may_map locking

Message ID 20201005031954.144239-1-jarkko.sakkinen@linux.intel.com (mailing list archive)
State New, archived
Series [v2] x86/sgx: Fix sgx_encl_may_map locking

Commit Message

Jarkko Sakkinen Oct. 5, 2020, 3:19 a.m. UTC
Fix the issue discussed further in:

1. https://lore.kernel.org/linux-sgx/op.0rwbv916wjvjmi@mqcpg7oapc828.gar.corp.intel.com/
2. https://lore.kernel.org/linux-sgx/20201003195440.GD20115@casper.infradead.org/

Use the approach suggested by Matthew, which is supported by the analysis
that I wrote:

https://lore.kernel.org/linux-sgx/20201005030619.GA126283@linux.intel.com/
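The approach reduces to one control-flow pattern: check every page, but
drop the lock and reschedule only once per XA_CHECK_SCHED iterations. A
minimal userspace sketch of just that pattern (a plain array stands in
for the xarray; CHECK_SCHED, fake_pause() and may_map() are illustrative
names, not kernel API):

```c
#include <stddef.h>

#define CHECK_SCHED 4096	/* stand-in for XA_CHECK_SCHED */

static unsigned long pauses;	/* counts simulated lock drops */

/* Stand-in for xas_pause() + xas_unlock() + cond_resched() + xas_lock(). */
static void fake_pause(void)
{
	pauses++;
}

/*
 * Walk every element and check it against the requested protection
 * bits.  Every element is checked; the "pause" happens only on each
 * CHECK_SCHED batch boundary.
 */
static int may_map(const unsigned char *vm_max_prot, size_t n,
		   unsigned char vm_prot_bits)
{
	unsigned long count = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (~vm_max_prot[i] & vm_prot_bits)
			return -1;	/* -EACCES in the kernel */

		if (!(++count % CHECK_SCHED))
			fake_pause();
	}
	return 0;
}
```

The modulo test keeps the pause off the common path, so the per-page cost
stays a single compare while long walks still yield the CPU periodically.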

Reported-by: Haitao Huang <haitao.huang@linux.intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Jethro Beekman <jethro@fortanix.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
---
 arch/x86/kernel/cpu/sgx/encl.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

Patch

diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 4c6407cd857a..2bb3ec6996e9 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -307,6 +307,7 @@  int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 	unsigned long idx_start = PFN_DOWN(start);
 	unsigned long idx_end = PFN_DOWN(end - 1);
 	struct sgx_encl_page *page;
+	unsigned long count = 0;
 
 	XA_STATE(xas, &encl->page_array, idx_start);
 
@@ -317,10 +318,30 @@  int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 	if (current->personality & READ_IMPLIES_EXEC)
 		return -EACCES;
 
-	xas_for_each(&xas, page, idx_end)
-		if (!page || (~page->vm_max_prot_bits & vm_prot_bits))
-			return -EACCES;
+	/*
+	 * No need to hold encl->lock:
+	 * 1. None of the page->* fields get written.
+	 * 2. page->vm_max_prot_bits is set in sgx_encl_page_alloc(), before
+	 *    the page is inserted with xa_insert(), and is never modified
+	 *    after that.
+	 */
+	xas_lock(&xas);
+	xas_for_each(&xas, page, idx_end) {
+		if (!page || (~page->vm_max_prot_bits & vm_prot_bits)) {
+			xas_unlock(&xas);
+			return -EACCES;
+		}
+
+		/* Reschedule on every XA_CHECK_SCHED iteration. */
+		if (!(++count % XA_CHECK_SCHED)) {
+			xas_pause(&xas);
+			xas_unlock(&xas);
+			cond_resched();
+			xas_lock(&xas);
+		}
+	}
+	xas_unlock(&xas);
 
 	return 0;
 }