From patchwork Wed Mar 3 15:03:19 2021
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, stable@vger.kernel.org, Dave Hansen, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    Jethro Beekman, Sean Christopherson, Serge Ayoun,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 1/5] x86/sgx: Fix a resource leak in sgx_init()
Date: Wed, 3 Mar 2021 17:03:19 +0200
Message-Id: <20210303150323.433207-2-jarkko@kernel.org>
In-Reply-To: <20210303150323.433207-1-jarkko@kernel.org>
References: <20210303150323.433207-1-jarkko@kernel.org>

If sgx_page_cache_init() fails in the middle, a trivial return statement
leaks the memory and virtual address space already reserved for the EPC
sections. Fix this by using the same rollback path as when
sgx_page_reclaimer_init() fails.
Cc: stable@vger.kernel.org # 5.11
Fixes: e7e0545299d8 ("x86/sgx: Initialize metadata for Enclave Page Cache (EPC) sections")
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 8df81a3ed945..52d070fb4c9a 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -708,8 +708,10 @@ static int __init sgx_init(void)
 	if (!cpu_feature_enabled(X86_FEATURE_SGX))
 		return -ENODEV;
 
-	if (!sgx_page_cache_init())
-		return -ENOMEM;
+	if (!sgx_page_cache_init()) {
+		ret = -ENOMEM;
+		goto err_page_cache;
+	}
 
 	if (!sgx_page_reclaimer_init()) {
 		ret = -ENOMEM;
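
The fix above converges both failure paths on a single goto-based unwind
point. For illustration only, here is a minimal, self-contained userspace
sketch of that idiom; init_cache() and init_reclaimer() are hypothetical
stand-ins for the real initializers (which also unwind any partially
initialized sections), not part of the series:

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-ins for sgx_page_cache_init() and sgx_page_reclaimer_init(). */
	static void *cache;

	static int init_cache(void)      { cache = malloc(64); return cache ? 0 : -ENOMEM; }
	static int init_reclaimer(void)  { return -ENOMEM; /* force the error path */ }
	static void teardown_cache(void) { free(cache); cache = NULL; }

	static int sgx_init_sketch(void)
	{
		int ret;

		if (init_cache()) {
			ret = -ENOMEM;
			goto err_page_cache;	/* same unwind as the later failure */
		}

		if (init_reclaimer()) {
			ret = -ENOMEM;
			goto err_page_cache;
		}

		return 0;

	err_page_cache:
		teardown_cache();	/* frees what init_cache() reserved */
		return ret;
	}

	int main(void)
	{
		printf("sgx_init_sketch() = %d\n", sgx_init_sketch());
		return 0;
	}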
From patchwork Wed Mar 3 15:03:20 2021
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, Dave Hansen, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/5] x86/sgx: Use sgx_free_epc_page() in sgx_reclaim_pages()
Date: Wed, 3 Mar 2021 17:03:20 +0200
Message-Id: <20210303150323.433207-3-jarkko@kernel.org>
In-Reply-To: <20210303150323.433207-1-jarkko@kernel.org>
References: <20210303150323.433207-1-jarkko@kernel.org>

Replace the ad hoc code with sgx_free_epc_page() to make sure that all
the relevant checks and bookkeeping are done while freeing a borrowed
EPC page.

Signed-off-by: Jarkko Sakkinen
Acked-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/main.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 52d070fb4c9a..ed99c60024dc 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -305,7 +305,6 @@ static void sgx_reclaim_pages(void)
 {
 	struct sgx_epc_page *chunk[SGX_NR_TO_SCAN];
 	struct sgx_backing backing[SGX_NR_TO_SCAN];
-	struct sgx_epc_section *section;
 	struct sgx_encl_page *encl_page;
 	struct sgx_epc_page *epc_page;
 	pgoff_t page_index;
@@ -378,11 +377,7 @@ static void sgx_reclaim_pages(void)
 		kref_put(&encl_page->encl->refcount, sgx_encl_release);
 		epc_page->flags &= ~SGX_EPC_PAGE_RECLAIMER_TRACKED;
 
-		section = &sgx_epc_sections[epc_page->section];
-		spin_lock(&section->lock);
-		list_add_tail(&epc_page->list, &section->page_list);
-		section->free_cnt++;
-		spin_unlock(&section->lock);
+		sgx_free_epc_page(epc_page);
 	}
 }
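
The point of the conversion above is that every free path runs the same
checks and bookkeeping. A toy userspace sketch of the same idea, where
free_page_checked() is a made-up stand-in for sgx_free_epc_page() and the
pool is deliberately simplistic:

	#include <assert.h>
	#include <stdio.h>

	/* Toy page pool; free_page_checked() mimics sgx_free_epc_page(). */
	struct page {
		int in_use;
		int tracked;	/* stand-in for SGX_EPC_PAGE_RECLAIMER_TRACKED */
		struct page *next;
	};

	static struct page *free_list;
	static unsigned long free_cnt;

	/* One helper owns every check and all bookkeeping for the free path. */
	static void free_page_checked(struct page *p)
	{
		assert(!p->tracked);	/* mirrors the WARN_ON_ONCE() in the helper */
		p->in_use = 0;
		p->next = free_list;	/* list and counter updates live here, */
		free_list = p;		/* not open-coded in every caller      */
		free_cnt++;
	}

	int main(void)
	{
		struct page p = { .in_use = 1 };

		/*
		 * A reclaimer previously open-coded the list and counter
		 * updates; routing them through one helper keeps them
		 * consistent across all callers.
		 */
		free_page_checked(&p);
		printf("free_cnt = %lu\n", free_cnt);
		return 0;
	}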
Peter Anvin" , linux-kernel@vger.kernel.org Subject: [PATCH v3 3/5] x86/sgx: Replace section->init_laundry_list with a temp list Date: Wed, 3 Mar 2021 17:03:21 +0200 Message-Id: <20210303150323.433207-4-jarkko@kernel.org> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210303150323.433207-1-jarkko@kernel.org> References: <20210303150323.433207-1-jarkko@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sgx@vger.kernel.org Build a local laundry list in sgx_init(), and transfer its ownsership to ksgxd for sanitization, thus getting rid of useless member in struct sgx_epc_section. Signed-off-by: Jarkko Sakkinen --- arch/x86/kernel/cpu/sgx/main.c | 64 ++++++++++++++++++---------------- arch/x86/kernel/cpu/sgx/sgx.h | 7 ---- 2 files changed, 34 insertions(+), 37 deletions(-) diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c index ed99c60024dc..a649010949c2 100644 --- a/arch/x86/kernel/cpu/sgx/main.c +++ b/arch/x86/kernel/cpu/sgx/main.c @@ -30,35 +30,33 @@ static DEFINE_SPINLOCK(sgx_reclaimer_lock); * Reset dirty EPC pages to uninitialized state. Laundry can be left with SECS * pages whose child pages blocked EREMOVE. */ -static void sgx_sanitize_section(struct sgx_epc_section *section) +static void sgx_sanitize_section(struct list_head *laundry) { struct sgx_epc_page *page; LIST_HEAD(dirty); int ret; /* init_laundry_list is thread-local, no need for a lock: */ - while (!list_empty(§ion->init_laundry_list)) { + while (!list_empty(laundry)) { if (kthread_should_stop()) return; - /* needed for access to ->page_list: */ - spin_lock(§ion->lock); - - page = list_first_entry(§ion->init_laundry_list, - struct sgx_epc_page, list); + page = list_first_entry(laundry, struct sgx_epc_page, list); ret = __eremove(sgx_get_epc_virt_addr(page)); - if (!ret) - list_move(&page->list, §ion->page_list); - else + if (!ret) { + /* The page is clean - move to the free list. */ + list_del(&page->list); + sgx_free_epc_page(page); + } else { + /* The page is not yet clean - move to the dirty list. */ list_move_tail(&page->list, &dirty); - - spin_unlock(§ion->lock); + } cond_resched(); } - list_splice(&dirty, §ion->init_laundry_list); + list_splice(&dirty, laundry); } static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page) @@ -400,6 +398,7 @@ static bool sgx_should_reclaim(unsigned long watermark) static int ksgxd(void *p) { + struct list_head *laundry = p; int i; set_freezable(); @@ -408,16 +407,13 @@ static int ksgxd(void *p) * Sanitize pages in order to recover from kexec(). The 2nd pass is * required for SECS pages, whose child pages blocked EREMOVE. */ - for (i = 0; i < sgx_nr_epc_sections; i++) - sgx_sanitize_section(&sgx_epc_sections[i]); + sgx_sanitize_section(laundry); + sgx_sanitize_section(laundry); - for (i = 0; i < sgx_nr_epc_sections; i++) { - sgx_sanitize_section(&sgx_epc_sections[i]); + if (!list_empty(laundry)) + WARN(1, "EPC section %d has unsanitized pages.\n", i); - /* Should never happen. 
*/ - if (!list_empty(&sgx_epc_sections[i].init_laundry_list)) - WARN(1, "EPC section %d has unsanitized pages.\n", i); - } + kfree(laundry); while (!kthread_should_stop()) { if (try_to_freeze()) @@ -436,11 +432,11 @@ static int ksgxd(void *p) return 0; } -static bool __init sgx_page_reclaimer_init(void) +static bool __init sgx_page_reclaimer_init(struct list_head *laundry) { struct task_struct *tsk; - tsk = kthread_run(ksgxd, NULL, "ksgxd"); + tsk = kthread_run(ksgxd, laundry, "ksgxd"); if (IS_ERR(tsk)) return false; @@ -614,7 +610,8 @@ void sgx_free_epc_page(struct sgx_epc_page *page) static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size, unsigned long index, - struct sgx_epc_section *section) + struct sgx_epc_section *section, + struct list_head *laundry) { unsigned long nr_pages = size >> PAGE_SHIFT; unsigned long i; @@ -632,13 +629,12 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size, section->phys_addr = phys_addr; spin_lock_init(§ion->lock); INIT_LIST_HEAD(§ion->page_list); - INIT_LIST_HEAD(§ion->init_laundry_list); for (i = 0; i < nr_pages; i++) { section->pages[i].section = index; section->pages[i].flags = 0; section->pages[i].owner = NULL; - list_add_tail(§ion->pages[i].list, §ion->init_laundry_list); + list_add_tail(§ion->pages[i].list, laundry); } section->free_cnt = nr_pages; @@ -656,7 +652,7 @@ static inline u64 __init sgx_calc_section_metric(u64 low, u64 high) ((high & GENMASK_ULL(19, 0)) << 32); } -static bool __init sgx_page_cache_init(void) +static bool __init sgx_page_cache_init(struct list_head *laundry) { u32 eax, ebx, ecx, edx, type; u64 pa, size; @@ -679,7 +675,7 @@ static bool __init sgx_page_cache_init(void) pr_info("EPC section 0x%llx-0x%llx\n", pa, pa + size - 1); - if (!sgx_setup_epc_section(pa, size, i, &sgx_epc_sections[i])) { + if (!sgx_setup_epc_section(pa, size, i, &sgx_epc_sections[i], laundry)) { pr_err("No free memory for an EPC section\n"); break; } @@ -697,18 +693,25 @@ static bool __init sgx_page_cache_init(void) static int __init sgx_init(void) { + struct list_head *laundry; int ret; int i; if (!cpu_feature_enabled(X86_FEATURE_SGX)) return -ENODEV; - if (!sgx_page_cache_init()) { + laundry = kzalloc(sizeof(*laundry), GFP_KERNEL); + if (!laundry) + return -ENOMEM; + + INIT_LIST_HEAD(laundry); + + if (!sgx_page_cache_init(laundry)) { ret = -ENOMEM; goto err_page_cache; } - if (!sgx_page_reclaimer_init()) { + if (!sgx_page_reclaimer_init(laundry)) { ret = -ENOMEM; goto err_page_cache; } @@ -728,6 +731,7 @@ static int __init sgx_init(void) memunmap(sgx_epc_sections[i].virt_addr); } + kfree(laundry); return ret; } diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h index 5fa42d143feb..bc8af0428640 100644 --- a/arch/x86/kernel/cpu/sgx/sgx.h +++ b/arch/x86/kernel/cpu/sgx/sgx.h @@ -45,13 +45,6 @@ struct sgx_epc_section { spinlock_t lock; struct list_head page_list; unsigned long free_cnt; - - /* - * Pages which need EREMOVE run on them before they can be - * used. Only safe to be accessed in ksgxd and init code. - * Not protected by locks. 
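
The core idea of this patch is an ownership handoff: sgx_init()
heap-allocates the list head, passes it through kthread_run()'s data
pointer, and ksgxd() kfree()s it once sanitization is done. A rough
userspace analogue using pthreads, with all names invented for the
sketch:

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* The work item the spawner builds and the thread consumes. */
	struct laundry {
		int nr_dirty_pages;
	};

	static void *ksgxd_sketch(void *p)
	{
		struct laundry *laundry = p;	/* thread now owns the allocation */

		printf("sanitizing %d pages\n", laundry->nr_dirty_pages);
		free(laundry);			/* analogous to the kfree() in ksgxd() */
		return NULL;
	}

	int main(void)
	{
		pthread_t tsk;
		struct laundry *laundry = calloc(1, sizeof(*laundry));

		if (!laundry)
			return 1;
		laundry->nr_dirty_pages = 42;

		/* After a successful create, the spawner must not touch *laundry. */
		if (pthread_create(&tsk, NULL, ksgxd_sketch, laundry)) {
			free(laundry);	/* creation failed: ownership stays here */
			return 1;
		}
		pthread_join(tsk, NULL);
		return 0;
	}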
From patchwork Wed Mar 3 15:03:22 2021
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, Dave Hansen, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin",
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/5] x86/sgx: Replace section->page_list with a global free page list
Date: Wed, 3 Mar 2021 17:03:22 +0200
Message-Id: <20210303150323.433207-5-jarkko@kernel.org>
In-Reply-To: <20210303150323.433207-1-jarkko@kernel.org>
References: <20210303150323.433207-1-jarkko@kernel.org>

Background
==========

An EPC section is covered by one or more SRAT entries, each associated
with one and only one PXM (NUMA node). The current implementation
overheats a single NUMA node, because sgx_alloc_epc_page() always starts
looking for pages from the same EPC section. Only within a section are
pages picked in FIFO fashion, i.e. the oldest page freed in that section
is the one handed back to the caller. That does no good, as pages within
the same node are performance-wise equal.

Solution
========

Replace the local lists with a single global free page list, from which
pages are borrowed in FIFO fashion.
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/kernel/cpu/sgx/main.c | 84 +++++++++++-----------------------
 arch/x86/kernel/cpu/sgx/sgx.h  |  6 ---
 2 files changed, 27 insertions(+), 63 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index a649010949c2..58474480f5be 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -18,14 +18,15 @@ static int sgx_nr_epc_sections;
 static struct task_struct *ksgxd_tsk;
 static DECLARE_WAIT_QUEUE_HEAD(ksgxd_waitq);
 
-/*
- * These variables are part of the state of the reclaimer, and must be accessed
- * with sgx_reclaimer_lock acquired.
- */
+/* The reclaimer lock protected variables prepend the lock. */
 static LIST_HEAD(sgx_active_page_list);
-
 static DEFINE_SPINLOCK(sgx_reclaimer_lock);
 
+/* The free page list lock protected variables prepend the lock. */
+static unsigned long sgx_nr_free_pages;
+static LIST_HEAD(sgx_free_page_list);
+static DEFINE_SPINLOCK(sgx_free_page_list_lock);
+
 /*
  * Reset dirty EPC pages to uninitialized state. Laundry can be left with SECS
  * pages whose child pages blocked EREMOVE.
@@ -379,21 +380,9 @@ static void sgx_reclaim_pages(void)
 	}
 }
 
-static unsigned long sgx_nr_free_pages(void)
-{
-	unsigned long cnt = 0;
-	int i;
-
-	for (i = 0; i < sgx_nr_epc_sections; i++)
-		cnt += sgx_epc_sections[i].free_cnt;
-
-	return cnt;
-}
-
 static bool sgx_should_reclaim(unsigned long watermark)
 {
-	return sgx_nr_free_pages() < watermark &&
-	       !list_empty(&sgx_active_page_list);
+	return sgx_nr_free_pages < watermark && !list_empty(&sgx_active_page_list);
 }
 
 static int ksgxd(void *p)
@@ -445,50 +434,34 @@ static bool __init sgx_page_reclaimer_init(struct list_head *laundry)
 	return true;
 }
 
-static struct sgx_epc_page *__sgx_alloc_epc_page_from_section(struct sgx_epc_section *section)
-{
-	struct sgx_epc_page *page;
-
-	spin_lock(&section->lock);
-
-	if (list_empty(&section->page_list)) {
-		spin_unlock(&section->lock);
-		return NULL;
-	}
-
-	page = list_first_entry(&section->page_list, struct sgx_epc_page, list);
-	list_del_init(&page->list);
-	section->free_cnt--;
-
-	spin_unlock(&section->lock);
-	return page;
-}
-
 /**
  * __sgx_alloc_epc_page() - Allocate an EPC page
  *
- * Iterate through EPC sections and borrow a free EPC page to the caller. When a
- * page is no longer needed it must be released with sgx_free_epc_page().
+ * Borrow a free EPC page to the caller in FIFO fashion: the caller gets the
+ * oldest freed page.
  *
  * Return:
- *   an EPC page,
- *   -errno on error
+ * - an EPC page:	Free EPC pages were available.
+ * - ERR_PTR(-ENOMEM):	Run out of EPC pages.
 */
 struct sgx_epc_page *__sgx_alloc_epc_page(void)
 {
-	struct sgx_epc_section *section;
 	struct sgx_epc_page *page;
-	int i;
 
-	for (i = 0; i < sgx_nr_epc_sections; i++) {
-		section = &sgx_epc_sections[i];
+	spin_lock(&sgx_free_page_list_lock);
 
-		page = __sgx_alloc_epc_page_from_section(section);
-		if (page)
-			return page;
+	if (list_empty(&sgx_free_page_list)) {
+		spin_unlock(&sgx_free_page_list_lock);
+		return NULL;
 	}
 
-	return ERR_PTR(-ENOMEM);
+	page = list_first_entry(&sgx_free_page_list, struct sgx_epc_page, list);
+	list_del_init(&page->list);
+	sgx_nr_free_pages--;
+
+	spin_unlock(&sgx_free_page_list_lock);
+
+	return page;
 }
 
 /**
@@ -593,7 +566,6 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
  */
 void sgx_free_epc_page(struct sgx_epc_page *page)
 {
-	struct sgx_epc_section *section = &sgx_epc_sections[page->section];
 	int ret;
 
 	WARN_ON_ONCE(page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED);
@@ -602,10 +574,10 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 	if (WARN_ONCE(ret, "EREMOVE returned %d (0x%x)", ret, ret))
 		return;
 
-	spin_lock(&section->lock);
-	list_add_tail(&page->list, &section->page_list);
-	section->free_cnt++;
-	spin_unlock(&section->lock);
+	spin_lock(&sgx_free_page_list_lock);
+	list_add_tail(&page->list, &sgx_free_page_list);
+	sgx_nr_free_pages++;
+	spin_unlock(&sgx_free_page_list_lock);
 }
 
 static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
@@ -627,8 +599,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 	}
 
 	section->phys_addr = phys_addr;
-	spin_lock_init(&section->lock);
-	INIT_LIST_HEAD(&section->page_list);
 
 	for (i = 0; i < nr_pages; i++) {
 		section->pages[i].section = index;
@@ -637,7 +607,7 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, laundry);
 	}
 
-	section->free_cnt = nr_pages;
+	sgx_nr_free_pages += nr_pages;
 
 	return true;
 }

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index bc8af0428640..41ca045a574a 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -34,17 +34,11 @@ struct sgx_epc_page {
  * physical memory e.g. for memory areas of the each node. This structure is
  * used to store EPC pages for one EPC section and virtual memory area where
  * the pages have been mapped.
- *
- * 'lock' must be held before accessing 'page_list' or 'free_cnt'.
  */
 struct sgx_epc_section {
 	unsigned long phys_addr;
 	void *virt_addr;
 	struct sgx_epc_page *pages;
-
-	spinlock_t lock;
-	struct list_head page_list;
-	unsigned long free_cnt;
 };
 
 extern struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
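
The fast path introduced above boils down to one spinlock-protected FIFO
list plus a counter sharing that lock. A compilable userspace
approximation, with a pthread mutex standing in for the spinlock and all
types and names invented for the sketch:

	#include <pthread.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Minimal FIFO free list guarded by a single lock. */
	struct epc_page {
		int id;
		struct epc_page *next;
	};

	static struct epc_page *head, *tail;	/* FIFO: pop head, push tail */
	static unsigned long nr_free;
	static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;

	static struct epc_page *alloc_page(void)
	{
		struct epc_page *page;

		pthread_mutex_lock(&free_lock);
		page = head;
		if (page) {			/* oldest freed page goes first */
			head = page->next;
			if (!head)
				tail = NULL;
			nr_free--;
		}
		pthread_mutex_unlock(&free_lock);
		return page;
	}

	static void free_page(struct epc_page *page)
	{
		pthread_mutex_lock(&free_lock);
		page->next = NULL;
		if (tail)
			tail->next = page;
		else
			head = page;
		tail = page;
		nr_free++;		/* counter shares the list's lock */
		pthread_mutex_unlock(&free_lock);
	}

	int main(void)
	{
		struct epc_page a = { .id = 1 }, b = { .id = 2 };

		free_page(&a);
		free_page(&b);
		printf("first alloc: id=%d (FIFO order)\n", alloc_page()->id);
		return 0;
	}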
From patchwork Wed Mar 3 15:03:23 2021
From: Jarkko Sakkinen
To: linux-sgx@vger.kernel.org
Cc: Jarkko Sakkinen, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, "H. Peter Anvin", Dave Hansen,
    linux-kernel@vger.kernel.org
Subject: [PATCH v3 5/5] x86/sgx: Add a basic NUMA allocation scheme to sgx_alloc_epc_page()
Date: Wed, 3 Mar 2021 17:03:23 +0200
Message-Id: <20210303150323.433207-6-jarkko@kernel.org>
In-Reply-To: <20210303150323.433207-1-jarkko@kernel.org>
References: <20210303150323.433207-1-jarkko@kernel.org>

Background
==========

An EPC section is covered by one or more SRAT entries, each associated
with one and only one PXM (NUMA node). The motivation behind this patch
is to provide the basic elements for building an allocation scheme based
on this premise.

Solution
========

Use phys_to_target_node() to associate each NUMA node with the EPC
sections contained within its range. In sgx_alloc_epc_page(), first try
to allocate from the NUMA node where the CPU is executing. If that
fails, fall back to the legacy allocation.
Link: https://lore.kernel.org/lkml/158188326978.894464.217282995221175417.stgit@dwillia2-desk3.amr.corp.intel.com/
Signed-off-by: Jarkko Sakkinen
---
 arch/x86/Kconfig               |  1 +
 arch/x86/kernel/cpu/sgx/main.c | 84 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/sgx.h  |  9 ++++
 3 files changed, 94 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a5f6a3013138..7eb1e96cfe8a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1940,6 +1940,7 @@ config X86_SGX
 	depends on CRYPTO_SHA256=y
 	select SRCU
 	select MMU_NOTIFIER
+	select NUMA_KEEP_MEMINFO if NUMA
 	help
 	  Intel(R) Software Guard eXtensions (SGX) is a set of CPU instructions
 	  that can be used by applications to set aside private regions of code

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 58474480f5be..62cc0e1f0728 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -25,6 +25,23 @@ static DEFINE_SPINLOCK(sgx_reclaimer_lock);
 /* The free page list lock protected variables prepend the lock. */
 static unsigned long sgx_nr_free_pages;
 static LIST_HEAD(sgx_free_page_list);
+
+/* Nodes with one or more EPC sections. */
+static nodemask_t sgx_numa_mask;
+
+/*
+ * Array with one list_head for each possible NUMA node. Each
+ * list contains all the sgx_epc_section's which are on that
+ * node.
+ */
+static struct sgx_numa_node *sgx_numa_nodes;
+
+/*
+ * sgx_free_epc_page() uses this to find out the correct struct sgx_numa_node,
+ * to put the page in.
+ */
+static int sgx_section_to_numa_node_id[SGX_MAX_EPC_SECTIONS];
+
 static DEFINE_SPINLOCK(sgx_free_page_list_lock);
 
@@ -434,6 +451,36 @@ static bool __init sgx_page_reclaimer_init(struct list_head *laundry)
 	return true;
 }
 
+static struct sgx_epc_page *__sgx_alloc_epc_page_from_node(int nid)
+{
+	struct sgx_epc_page *page = NULL;
+	struct sgx_numa_node *sgx_node;
+
+	if (WARN_ON_ONCE(nid < 0 || nid >= num_possible_nodes()))
+		return NULL;
+
+	if (!node_isset(nid, sgx_numa_mask))
+		return NULL;
+
+	sgx_node = &sgx_numa_nodes[nid];
+
+	spin_lock(&sgx_free_page_list_lock);
+
+	if (list_empty(&sgx_node->free_page_list)) {
+		spin_unlock(&sgx_free_page_list_lock);
+		return NULL;
+	}
+
+	page = list_first_entry(&sgx_node->free_page_list, struct sgx_epc_page, numa_list);
+	list_del_init(&page->numa_list);
+	list_del_init(&page->list);
+	sgx_nr_free_pages--;
+
+	spin_unlock(&sgx_free_page_list_lock);
+
+	return page;
+}
+
 /**
  * __sgx_alloc_epc_page() - Allocate an EPC page
  *
@@ -446,8 +493,14 @@ static bool __init sgx_page_reclaimer_init(struct list_head *laundry)
  */
 struct sgx_epc_page *__sgx_alloc_epc_page(void)
 {
+	int current_nid = numa_node_id();
 	struct sgx_epc_page *page;
 
+	/* Try to allocate EPC from the current node, first: */
+	page = __sgx_alloc_epc_page_from_node(current_nid);
+	if (page)
+		return page;
+
 	spin_lock(&sgx_free_page_list_lock);
 
 	if (list_empty(&sgx_free_page_list)) {
@@ -456,6 +509,7 @@ struct sgx_epc_page *__sgx_alloc_epc_page(void)
 	}
 
 	page = list_first_entry(&sgx_free_page_list, struct sgx_epc_page, list);
+	list_del_init(&page->numa_list);
 	list_del_init(&page->list);
 	sgx_nr_free_pages--;
 
@@ -566,6 +620,8 @@ struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim)
  */
 void sgx_free_epc_page(struct sgx_epc_page *page)
 {
+	int nid = sgx_section_to_numa_node_id[page->section];
+	struct sgx_numa_node *sgx_node = &sgx_numa_nodes[nid];
 	int ret;
 
 	WARN_ON_ONCE(page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED);
@@ -575,7 +631,15 @@ void sgx_free_epc_page(struct sgx_epc_page *page)
 		return;
 
 	spin_lock(&sgx_free_page_list_lock);
+
+	/* Enable NUMA local allocation in sgx_alloc_epc_page(). */
+	if (!node_isset(nid, sgx_numa_mask)) {
+		INIT_LIST_HEAD(&sgx_node->free_page_list);
+		node_set(nid, sgx_numa_mask);
+	}
+
 	list_add_tail(&page->list, &sgx_free_page_list);
+	list_add_tail(&page->numa_list, &sgx_node->free_page_list);
 	sgx_nr_free_pages++;
 	spin_unlock(&sgx_free_page_list_lock);
 }
@@ -626,8 +690,28 @@ static bool __init sgx_page_cache_init(struct list_head *laundry)
 {
 	u32 eax, ebx, ecx, edx, type;
 	u64 pa, size;
+	int nid;
 	int i;
 
+	nodes_clear(sgx_numa_mask);
+	sgx_numa_nodes = kmalloc_array(num_possible_nodes(), sizeof(*sgx_numa_nodes), GFP_KERNEL);
+
+	/*
+	 * Create NUMA node lookup table for sgx_free_epc_page() as the very
+	 * first step, as it is used to populate the free lists during the
+	 * initialization.
+	 */
+	for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
+		nid = numa_map_to_online_node(phys_to_target_node(pa));
+		if (nid == NUMA_NO_NODE) {
+			/* The physical address is already printed above. */
+			pr_warn(FW_BUG "Unable to map EPC section to online node. Fallback to the NUMA node 0.\n");
+			nid = 0;
+		}
+
+		sgx_section_to_numa_node_id[i] = nid;
+	}
+
 	for (i = 0; i < ARRAY_SIZE(sgx_epc_sections); i++) {
 		cpuid_count(SGX_CPUID, i + SGX_CPUID_EPC, &eax, &ebx, &ecx, &edx);

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index 41ca045a574a..3a3c07fc0c8e 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -27,6 +27,7 @@ struct sgx_epc_page {
 	unsigned int flags;
 	struct sgx_encl_page *owner;
 	struct list_head list;
+	struct list_head numa_list;
 };
 
 /*
@@ -43,6 +44,14 @@ struct sgx_epc_page {
 
 extern struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 
+/*
+ * Contains the tracking data for NUMA nodes having EPC pages. Most importantly,
+ * the free page list local to the node is stored here.
+ */
+struct sgx_numa_node {
+	struct list_head free_page_list;
+};
+
 static inline unsigned long sgx_get_epc_phys_addr(struct sgx_epc_page *page)
 {
 	struct sgx_epc_section *section = &sgx_epc_sections[page->section];
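
The allocation policy of this final patch - try the local node's list
first, then fall back - can be sketched as below. This is only an
illustration: the real fallback pops a global FIFO list rather than
scanning nodes, and all names here are invented for the sketch:

	#include <stdio.h>

	#define NR_NODES 2

	/* Per-node free counts stand in for the per-node free page lists. */
	static int node_free[NR_NODES] = { 0, 3 };

	static int alloc_from_node(int nid)
	{
		if (nid < 0 || nid >= NR_NODES || !node_free[nid])
			return -1;
		node_free[nid]--;
		return nid;	/* "page" allocated from this node */
	}

	/* Mirrors the shape of __sgx_alloc_epc_page(): local node first. */
	static int alloc_page(int current_nid)
	{
		int nid = alloc_from_node(current_nid);

		if (nid >= 0)
			return nid;

		/* Fallback: any node with free pages (simplified here). */
		for (nid = 0; nid < NR_NODES; nid++)
			if (alloc_from_node(nid) >= 0)
				return nid;
		return -1;
	}

	int main(void)
	{
		/* CPU on node 0, which has no free EPC: falls back to node 1. */
		printf("allocated from node %d\n", alloc_page(0));
		return 0;
	}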