
[v9,4/7] x86/sgx: Add SGX infrastructure to recover from poison

Message ID 20211011185924.374213-5-tony.luck@intel.com
State New, archived
Series [v9,1/7] x86/sgx: Add new sgx_epc_page flag bit to mark in-use pages

Commit Message

Luck, Tony Oct. 11, 2021, 6:59 p.m. UTC
Provide a recovery function arch_memory_failure(). If the poison was
consumed synchronously then send a SIGBUS. Note that, unlike the case
for poison outside of SGX enclaves, the virtual address of the access
is not included with the SIGBUS. This doesn't matter: addresses of
code/data inside an enclave are of little to no use to code executing
outside the (now dead) enclave.
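
For comparison (illustrative only, not part of this patch): recoverable
poison in normal memory is reported with the faulting address via
force_sig_mceerr(), whereas SGX has no useful address to report
(vaddr and lsb below are placeholders):

	/* Normal memory: the signal carries the faulting address */
	force_sig_mceerr(BUS_MCEERR_AR, (void __user *)vaddr, lsb);

	/* SGX EPC: no address would help code outside the dead enclave */
	force_sig(SIGBUS);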

Poison found in a free page results in the page being moved from the
free list to the poison page list.

Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 77 ++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

Comments

Sean Christopherson Oct. 15, 2021, 11:10 p.m. UTC | #1
On Mon, Oct 11, 2021, Tony Luck wrote:
> +	section = &sgx_epc_sections[page->section];
> +	node = section->node;
> +
> +	spin_lock(&node->lock);
> +
> +	/* Already poisoned? Nothing more to do */
> +	if (page->poison)
> +		goto out;
> +
> +	page->poison = 1;
> +
> +	/*
> +	 * If flags is zero, then the page is on a free list.
> +	 * Move it to the poison page list.
> +	 */
> +	if (!page->flags) {

If the flag is inverted, this becomes

	if (page->flags & SGX_EPC_PAGE_FREE) {

> +		list_del(&page->list);
> +		list_add(&page->list, &sgx_poison_page_list);

list_move(), and needs the same protection for sgx_poison_page_list.
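
E.g. (sketch):

	list_move(&page->list, &sgx_poison_page_list);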

> +		goto out;
> +	}
> +
> +	/*
> +	 * TBD: Add additional plumbing to enable pre-emptive
> +	 * action for asynchronous poison notification. Until
> +	 * then just hope that the poison:
> +	 * a) is not accessed - sgx_free_epc_page() will deal with it
> +	 *    when the user gives it back
> +	 * b) results in a recoverable machine check rather than
> +	 *    a fatal one
> +	 */
> +out:
> +	spin_unlock(&node->lock);
> +	return 0;
> +}
> +
>  /**
>   * A section metric is concatenated in a way that @low bits 12-31 define the
>   * bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
> 
> -- 
> 2.31.1
>
Luck, Tony Oct. 15, 2021, 11:19 p.m. UTC | #2
On Fri, Oct 15, 2021 at 11:10:32PM +0000, Sean Christopherson wrote:
> On Mon, Oct 11, 2021, Tony Luck wrote:
> > +	section = &sgx_epc_sections[page->section];
> > +	node = section->node;
> > +
> > +	spin_lock(&node->lock);
> > +
> > +	/* Already poisoned? Nothing more to do */
> > +	if (page->poison)
> > +		goto out;
> > +
> > +	page->poison = 1;
> > +
> > +	/*
> > +	 * If flags is zero, then the page is on a free list.
> > +	 * Move it to the poison page list.
> > +	 */
> > +	if (!page->flags) {
> 
> If the flag is inverted, this becomes
> 
> 	if (page->flags & SGX_EPC_PAGE_FREE) {

I like the inversion. I'll switch to SGX_EPC_PAGE_FREE.

> 
> > +		list_del(&page->list);
> > +		list_add(&page->list, &sgx_poison_page_list);
> 
> list_move(), and needs the same protection for sgx_poison_page_list.

Didn't know list_move() existed. Will change all the list_del()+list_add()
pairs into list_move().

Also change the sgx_poison_page_list from global to per-node. Then
the adds will be safe (accessed while holding the node->lock).
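
So that hunk would end up looking something like this (a sketch; the
exact field name for the per-node list is still TBD):

	if (page->flags & SGX_EPC_PAGE_FREE) {
		/* node->lock now covers the per-node poison list too */
		list_move(&page->list, &node->sgx_poison_page_list);
		goto out;
	}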


Thanks for the review.

-Tony

Patch

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 653bace26100..398c9749e4d1 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -682,6 +682,83 @@  bool arch_is_platform_page(u64 paddr)
 }
 EXPORT_SYMBOL_GPL(arch_is_platform_page);
 
+static struct sgx_epc_page *sgx_paddr_to_page(u64 paddr)
+{
+	struct sgx_epc_section *section;
+
+	section = xa_load(&sgx_epc_address_space, paddr);
+	if (!section)
+		return NULL;
+
+	return &section->pages[PFN_DOWN(paddr - section->phys_addr)];
+}
+
+/*
+ * Called in process context to handle a hardware-reported
+ * error in an SGX EPC page.
+ * If the MF_ACTION_REQUIRED bit is set in flags, then the
+ * context is the task that consumed the poison data. Otherwise
+ * this is called from a kernel thread unrelated to the page.
+ */
+int arch_memory_failure(unsigned long pfn, int flags)
+{
+	struct sgx_epc_page *page = sgx_paddr_to_page(pfn << PAGE_SHIFT);
+	struct sgx_epc_section *section;
+	struct sgx_numa_node *node;
+
+	/*
+	 * mm/memory-failure.c calls this routine for all errors
+	 * where there isn't a "struct page" for the address. But that
+	 * includes other address ranges besides SGX.
+	 */
+	if (!page)
+		return -ENXIO;
+
+	/*
+	 * If the poison was consumed synchronously, send a SIGBUS to
+	 * the task. Hardware has already exited the SGX enclave and
+	 * will not allow re-entry to an enclave that has a memory
+	 * error. The signal may help the task understand why the
+	 * enclave is broken.
+	 */
+	if (flags & MF_ACTION_REQUIRED)
+		force_sig(SIGBUS);
+
+	section = &sgx_epc_sections[page->section];
+	node = section->node;
+
+	spin_lock(&node->lock);
+
+	/* Already poisoned? Nothing more to do */
+	if (page->poison)
+		goto out;
+
+	page->poison = 1;
+
+	/*
+	 * If flags is zero, then the page is on a free list.
+	 * Move it to the poison page list.
+	 */
+	if (!page->flags) {
+		list_del(&page->list);
+		list_add(&page->list, &sgx_poison_page_list);
+		goto out;
+	}
+
+	/*
+	 * TBD: Add additional plumbing to enable pre-emptive
+	 * action for asynchronous poison notification. Until
+	 * then just hope that the poison:
+	 * a) is not accessed - sgx_free_epc_page() will deal with it
+	 *    when the user gives it back
+	 * b) results in a recoverable machine check rather than
+	 *    a fatal one
+	 */
+out:
+	spin_unlock(&node->lock);
+	return 0;
+}
+
 /**
  * A section metric is concatenated in a way that @low bits 12-31 define the
  * bits 12-31 of the metric and @high bits 0-19 define the bits 32-51 of the
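
For reference, the expected shape of the caller side in
mm/memory-failure.c (the hook is wired up elsewhere in this series;
this is a sketch, not the exact hunk):

	p = pfn_to_online_page(pfn);
	if (!p) {
		/* No struct page: possibly SGX EPC or another non-mm range */
		if (!arch_memory_failure(pfn, flags))
			return 0;
		/* -ENXIO from the arch hook: not SGX, handle as before */
	}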