x86/sgx: Do not update sgx_nr_free_pages in sgx_setup_epc_section()

Message ID 20210408092924.7032-1-jarkko@kernel.org (mailing list archive)

Commit Message

Jarkko Sakkinen April 8, 2021, 9:29 a.m. UTC
NUMA patches introduced this change to __sgx_sanitize_pages():

-		if (!ret)
-			list_move(&page->list, &section->page_list);
-		else
+		if (!ret) {
+			/*
+			 * page is now sanitized.  Make it available via the SGX
+			 * page allocator:
+			 */
+			list_del(&page->list);
+			sgx_free_epc_page(page);
+		} else {
+			/* The page is not yet clean - move to the dirty list. */
 			list_move_tail(&page->list, &dirty);
-
-		spin_unlock(&section->lock);
+		}

This was done because it is best to keep the logic that assigns
available-for-use EPC pages to the correct NUMA lists in a single location.

The problem is that sgx_nr_free_pages is also incremented by
sgx_free_epc_page(), so the counter ends up with double the number of pages
actually available.

Even before the NUMA patches the count was somewhat out of sync: the free-page
count was incremented before the pages were put on the free list. That did not
matter much, because sanitization is fairly fast, so the stale count only
prevented ksgxd from triggering for a short time after the system had powered
on.

Fixes: 51ab30eb2ad4 ("x86/sgx: Replace section->init_laundry_list with sgx_dirty_page_list")
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
---
v2:
* Wrote a more verbose and detailed description of what is going on.
* Split out from the patches. This is urgent - the attributes can wait.
 arch/x86/kernel/cpu/sgx/main.c | 1 -
 1 file changed, 1 deletion(-)
Patch

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 13a7599ce7d4..7df7048cb1c9 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -657,7 +657,6 @@  static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
 
-	sgx_nr_free_pages += nr_pages;
 	return true;
 }