From patchwork Sun Dec 4 18:40:44 2016
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 9460111
From: Jarkko Sakkinen
To: intel-sgx-kernel-dev@lists.01.org
Date: Sun, 4 Dec 2016 20:40:44 +0200
Message-Id: <20161204184044.21031-9-jarkko.sakkinen@linux.intel.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20161204184044.21031-1-jarkko.sakkinen@linux.intel.com>
References: <20161204184044.21031-1-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH v6 8/8] intel_sgx: add LRU algorithm to page swapping
List-Id: "Project: Intel® Software Guard Extensions for Linux*: https://01.org/intel-software-guard-extensions"

From: Sean Christopherson

Test and clear the Accessed (A) bit of an EPC page when isolating pages
during an EPC page swap. Move recently accessed pages to the end of the
load list instead of the eviction list, i.e. isolate only those pages
that have not been accessed since the last time the swapping flow was
run. This basic LRU algorithm yields a significant improvement in
throughput when the system is under heavy EPC pressure.

This patch is based on code originally written by Serge Ayoun for the
out-of-tree driver, with a bug fix: recently accessed EPC pages are now
always moved to the end of the load list. For unknown reasons, a version
of the out-of-tree driver containing this regression slipped into
GitHub.
Signed-off-by: Sean Christopherson
Reviewed-by: Jarkko Sakkinen
Tested-by: Jarkko Sakkinen
---
 drivers/platform/x86/intel_sgx.h            |  2 +-
 drivers/platform/x86/intel_sgx_ioctl.c      |  1 +
 drivers/platform/x86/intel_sgx_page_cache.c | 59 +++++++++++++++++++++--------
 drivers/platform/x86/intel_sgx_vma.c        |  1 +
 4 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/drivers/platform/x86/intel_sgx.h b/drivers/platform/x86/intel_sgx.h
index add3565..b659b71 100644
--- a/drivers/platform/x86/intel_sgx.h
+++ b/drivers/platform/x86/intel_sgx.h
@@ -193,7 +193,7 @@ long sgx_compat_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
 #endif
 
 /* Utility functions */
-
+int sgx_test_and_clear_young(struct sgx_encl_page *page, struct sgx_encl *encl);
 void *sgx_get_epc_page(struct sgx_epc_page *entry);
 void sgx_put_epc_page(void *epc_page_vaddr);
 struct page *sgx_get_backing(struct sgx_encl *encl,
diff --git a/drivers/platform/x86/intel_sgx_ioctl.c b/drivers/platform/x86/intel_sgx_ioctl.c
index ab0a4a3..53fa510 100644
--- a/drivers/platform/x86/intel_sgx_ioctl.c
+++ b/drivers/platform/x86/intel_sgx_ioctl.c
@@ -296,6 +296,7 @@ static bool sgx_process_add_page_req(struct sgx_add_page_req *req)
 	}
 
 	encl_page->epc_page = epc_page;
+	sgx_test_and_clear_young(encl_page, encl);
 	list_add_tail(&encl_page->load_list, &encl->load_list);
 
 	mutex_unlock(&encl->lock);
diff --git a/drivers/platform/x86/intel_sgx_page_cache.c b/drivers/platform/x86/intel_sgx_page_cache.c
index f2a2ed1..8a411b3 100644
--- a/drivers/platform/x86/intel_sgx_page_cache.c
+++ b/drivers/platform/x86/intel_sgx_page_cache.c
@@ -77,6 +77,42 @@ static unsigned int sgx_nr_high_pages;
 struct task_struct *ksgxswapd_tsk;
 static DECLARE_WAIT_QUEUE_HEAD(ksgxswapd_waitq);
 
+static int sgx_test_and_clear_young_cb(pte_t *ptep, pgtable_t token,
+				       unsigned long addr, void *data)
+{
+	pte_t pte;
+	int ret;
+
+	ret = pte_young(*ptep);
+	if (ret) {
+		pte = pte_mkold(*ptep);
+		set_pte_at((struct mm_struct *)data, addr, ptep, pte);
+	}
+
+	return ret;
+}
+
+/**
+ * sgx_test_and_clear_young() - Test and reset the accessed bit
+ * @page:	enclave EPC page to be tested for recent access
+ * @encl:	enclave which owns @page
+ *
+ * Checks the Access (A) bit from the PTE corresponding to the
+ * enclave page and clears it. Returns 1 if the page has been
+ * recently accessed and 0 if not.
+ */
+int sgx_test_and_clear_young(struct sgx_encl_page *page, struct sgx_encl *encl)
+{
+	struct vm_area_struct *vma = sgx_find_vma(encl, page->addr);
+
+	if (!vma)
+		return 0;
+
+	return apply_to_page_range(vma->vm_mm, page->addr, PAGE_SIZE,
+				   sgx_test_and_clear_young_cb, vma->vm_mm);
+}
+
 static struct sgx_tgid_ctx *sgx_isolate_tgid_ctx(unsigned long nr_to_scan)
 {
 	struct sgx_tgid_ctx *ctx = NULL;
@@ -166,7 +202,8 @@ static void sgx_isolate_pages(struct sgx_encl *encl,
					 struct sgx_encl_page,
					 load_list);
 
-		if (!(entry->flags & SGX_ENCL_PAGE_RESERVED)) {
+		if (!sgx_test_and_clear_young(entry, encl) &&
+		    !(entry->flags & SGX_ENCL_PAGE_RESERVED)) {
 			entry->flags |= SGX_ENCL_PAGE_RESERVED;
 			list_move_tail(&entry->load_list, dst);
 		} else {
@@ -267,19 +304,6 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 		entry = list_first_entry(src, struct sgx_encl_page,
					 load_list);
 
-	if (!sgx_pin_mm(encl)) {
-		while (!list_empty(src)) {
-			entry = list_first_entry(src, struct sgx_encl_page,
-						 load_list);
-			list_del(&entry->load_list);
-			mutex_lock(&encl->lock);
-			sgx_free_encl_page(entry, encl, 0);
-			mutex_unlock(&encl->lock);
-		}
-
-		return;
-	}
-
 	mutex_lock(&encl->lock);
 
 	/* EBLOCK */
@@ -345,8 +369,6 @@ static void sgx_write_pages(struct sgx_encl *encl, struct list_head *src)
 	}
 
 	mutex_unlock(&encl->lock);
-
-	sgx_unpin_mm(encl);
 }
 
 static void sgx_swap_pages(unsigned long nr_to_scan)
@@ -363,9 +385,14 @@ static void sgx_swap_pages(unsigned long nr_to_scan)
 	if (!encl)
 		goto out;
 
+	if (!sgx_pin_mm(encl))
+		goto out_enclave;
+
 	sgx_isolate_pages(encl, &cluster, nr_to_scan);
 	sgx_write_pages(encl, &cluster);
+	sgx_unpin_mm(encl);
 
+out_enclave:
 	kref_put(&encl->refcount, sgx_encl_release);
 out:
 	kref_put(&ctx->refcount, sgx_tgid_ctx_release);
diff --git a/drivers/platform/x86/intel_sgx_vma.c b/drivers/platform/x86/intel_sgx_vma.c
index 4515cc3c..1cf5ba9 100644
--- a/drivers/platform/x86/intel_sgx_vma.c
+++ b/drivers/platform/x86/intel_sgx_vma.c
@@ -262,6 +262,7 @@ static struct sgx_encl_page *sgx_vma_do_fault(struct vm_area_struct *vma,
 
 	/* Do not free */
 	epc_page = NULL;
+	sgx_test_and_clear_young(entry, encl);
 	list_add_tail(&entry->load_list, &encl->load_list);
 out:
 	mutex_unlock(&encl->lock);