From patchwork Tue Feb 8 00:45:43 2022
X-Patchwork-Submitter: Reinette Chatre
X-Patchwork-Id: 12737999
From: Reinette Chatre
To: dave.hansen@linux.intel.com, jarkko@kernel.org, tglx@linutronix.de,
    bp@alien8.de, luto@kernel.org, mingo@redhat.com,
    linux-sgx@vger.kernel.org, x86@kernel.org
Cc: seanjc@google.com, kai.huang@intel.com, cathy.zhang@intel.com,
    cedric.xing@intel.com, haitao.huang@intel.com, mark.shanahan@intel.com,
    hpa@zytor.com, linux-kernel@vger.kernel.org
Subject: [PATCH V2 21/32] selftests/sgx: Test two different SGX2 EAUG flows
Date: Mon, 7 Feb 2022 16:45:43 -0800
Message-Id: <32c1116934a588bd3e6c174684e3e36a05c0a4d4.1644274683.git.reinette.chatre@intel.com>
X-Mailing-List: linux-sgx@vger.kernel.org

Enclave pages can be added to an initialized enclave when an address
belonging to the enclave but without a backing page is accessed from
within the enclave. Accessing memory without a backing enclave page
from within an enclave can happen in two different ways:

1) Pre-emptively run ENCLU[EACCEPT]. Since the addition of a page
   always needs to be accepted by the enclave via ENCLU[EACCEPT], this
   flow is efficient: the first execution of ENCLU[EACCEPT] triggers
   the addition of the page, and when execution returns to the same
   instruction the second execution succeeds as the acceptance of the
   page.

2) A direct read or write. In this flow the page addition is triggered
   by the faulting access, but execution cannot resume at the
   instruction (read/write) that triggered the fault. Instead, the
   enclave needs to be entered at a different entry point to run the
   needed ENCLU[EACCEPT] before execution can return to the original
   entry point and the read/write instruction that faulted.
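For reference, the enclave-side acceptance in flow (1) boils down to a
single ENCLU instruction. The sketch below is illustrative only and not
part of this patch (the handler these tests actually invoke is the
ENCL_OP_EACCEPT operation implemented in the test enclave elsewhere in
this series); the helper name is hypothetical, and the sketch relies
only on the architectural convention that EACCEPT is ENCLU leaf
function 5, with RBX holding the address of a 64-byte aligned SECINFO
and RCX the address of the page being accepted:

static unsigned long eaccept_page(void *secinfo, void *epc_addr)
{
        unsigned long rax = 5;  /* ENCLU leaf function: EACCEPT */

        /* Must run from within the enclave; a fault here surfaces as an AEX. */
        asm volatile("enclu"
                     : "+a" (rax)
                     : "b" (secinfo), "c" (epc_addr)
                     : "memory");

        return rax;     /* 0 on success, SGX error code otherwise */
}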
Add tests for both flows.

Signed-off-by: Reinette Chatre
---
Changes since v1:
- Replace __cpuid() definition and usage with __cpuid_count().
- Fix accuracy of comments.

 tools/testing/selftests/sgx/main.c | 243 +++++++++++++++++++++++++++++
 1 file changed, 243 insertions(+)

diff --git a/tools/testing/selftests/sgx/main.c b/tools/testing/selftests/sgx/main.c
index 1398cd1b0983..68285603b3f0 100644
--- a/tools/testing/selftests/sgx/main.c
+++ b/tools/testing/selftests/sgx/main.c
@@ -86,6 +86,15 @@ static bool vdso_get_symtab(void *addr, struct vdso_symtab *symtab)
 	return true;
 }
 
+static inline int sgx2_supported(void)
+{
+	unsigned int eax, ebx, ecx, edx;
+
+	__cpuid_count(SGX_CPUID, 0x0, eax, ebx, ecx, edx);
+
+	return eax & 0x2;
+}
+
 static unsigned long elf_sym_hash(const char *name)
 {
 	unsigned long h = 0, high;
@@ -882,4 +891,238 @@ TEST_F(enclave, epcm_permissions)
 	EXPECT_EQ(self->run.exception_addr, 0);
 }
 
+/*
+ * Test the addition of pages to an initialized enclave via writing to
+ * a page belonging to the enclave's address space that was not added
+ * during enclave creation.
+ */
+TEST_F(enclave, augment)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	size_t total_size = 0;
+	void *addr;
+	int i;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	/*
+	 * Actual enclave size is expected to be larger than the loaded
+	 * test enclave since enclave size must be a power of 2 in bytes
+	 * and test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + PAGE_SIZE, self->encl.encl_size);
+
+	/*
+	 * Create memory mapping for the page that will be added. New
+	 * memory mapping is for one page right after all existing
+	 * mappings.
+	 */
+	addr = mmap((void *)self->encl.encl_base + total_size, PAGE_SIZE,
+		    PROT_READ | PROT_WRITE | PROT_EXEC,
+		    MAP_SHARED | MAP_FIXED, self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Attempt to write to the new page from within the enclave.
+	 * Expected to fail since the page is not (yet) part of the enclave.
+	 * The first #PF will trigger the addition of the page to the
+	 * enclave, but since the new page needs an EACCEPT from within the
+	 * enclave before it can be used it would not be possible
+	 * to successfully return to the failing instruction. This is the
+	 * cause of the second #PF captured here having the SGX bit set:
+	 * it is the hardware preventing the page from being used.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)addr;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(self->run.function, ERESUME);
+	EXPECT_EQ(self->run.exception_vector, 14);
+	EXPECT_EQ(self->run.exception_addr, (unsigned long)addr);
+
+	if (self->run.exception_error_code == 0x6) {
+		munmap(addr, PAGE_SIZE);
+		SKIP(return, "Kernel does not support adding pages to initialized enclave");
+	}
+
+	EXPECT_EQ(self->run.exception_error_code, 0x8007);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/* Handle AEX by running EACCEPT from new entry point. */
+	self->run.tcs = self->encl.encl_base + PAGE_SIZE;
+
+	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/* Can now return to main TCS to resume execution. */
+	self->run.tcs = self->encl.encl_base;
+
+	EXPECT_EQ(vdso_sgx_enter_enclave((unsigned long)&put_addr_op, 0, 0,
+					 ERESUME, 0, 0,
+					 &self->run),
+		  0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from newly added page that was just written to,
+	 * confirming that data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)addr;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, PAGE_SIZE);
+}
+
+/*
+ * Test for the addition of pages to an initialized enclave via a
+ * pre-emptive run of EACCEPT on the page to be added.
+ */
+TEST_F(enclave, augment_via_eaccept)
+{
+	struct encl_op_get_from_addr get_addr_op;
+	struct encl_op_put_to_addr put_addr_op;
+	struct encl_op_eaccept eaccept_op;
+	size_t total_size = 0;
+	void *addr;
+	int i;
+
+	if (!sgx2_supported())
+		SKIP(return, "SGX2 not supported");
+
+	ASSERT_TRUE(setup_test_encl(ENCL_HEAP_SIZE_DEFAULT, &self->encl, _metadata));
+
+	memset(&self->run, 0, sizeof(self->run));
+	self->run.tcs = self->encl.encl_base;
+
+	for (i = 0; i < self->encl.nr_segments; i++) {
+		struct encl_segment *seg = &self->encl.segment_tbl[i];
+
+		total_size += seg->size;
+	}
+
+	/*
+	 * Actual enclave size is expected to be larger than the loaded
+	 * test enclave since enclave size must be a power of 2 in bytes
+	 * and test_encl does not consume it all.
+	 */
+	EXPECT_LT(total_size + PAGE_SIZE, self->encl.encl_size);
+
+	/*
+	 * mmap() a page at the end of the existing enclave to be used as
+	 * the dynamic EPC page.
+	 */
+
+	addr = mmap((void *)self->encl.encl_base + total_size, PAGE_SIZE,
+		    PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED | MAP_FIXED,
+		    self->encl.fd, 0);
+	EXPECT_NE(addr, MAP_FAILED);
+
+	self->run.exception_vector = 0;
+	self->run.exception_error_code = 0;
+	self->run.exception_addr = 0;
+
+	/*
+	 * Run EACCEPT on new page to trigger the #PF->EAUG->EACCEPT(again
+	 * without a #PF). All should be transparent to userspace.
+	 */
+	eaccept_op.epc_addr = self->encl.encl_base + total_size;
+	eaccept_op.flags = SGX_SECINFO_R | SGX_SECINFO_W | SGX_SECINFO_REG | SGX_SECINFO_PENDING;
+	eaccept_op.ret = 0;
+	eaccept_op.header.type = ENCL_OP_EACCEPT;
+
+	EXPECT_EQ(ENCL_CALL(&eaccept_op, &self->run, true), 0);
+
+	if (self->run.exception_vector == 14 &&
+	    self->run.exception_error_code == 4 &&
+	    self->run.exception_addr == self->encl.encl_base + total_size) {
+		munmap(addr, PAGE_SIZE);
+		SKIP(return, "Kernel does not support adding pages to initialized enclave");
+	}
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+	EXPECT_EQ(eaccept_op.ret, 0);
+
+	/*
+	 * New page should be accessible from within enclave - attempt to
+	 * write to it.
+	 */
+	put_addr_op.value = MAGIC;
+	put_addr_op.addr = (unsigned long)addr;
+	put_addr_op.header.type = ENCL_OP_PUT_TO_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&put_addr_op, &self->run, true), 0);
+
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	/*
+	 * Read memory from newly added page that was just written to,
+	 * confirming that data previously written (MAGIC) is present.
+	 */
+	get_addr_op.value = 0;
+	get_addr_op.addr = (unsigned long)addr;
+	get_addr_op.header.type = ENCL_OP_GET_FROM_ADDRESS;
+
+	EXPECT_EQ(ENCL_CALL(&get_addr_op, &self->run, true), 0);
+
+	EXPECT_EQ(get_addr_op.value, MAGIC);
+	EXPECT_EEXIT(&self->run);
+	EXPECT_EQ(self->run.exception_vector, 0);
+	EXPECT_EQ(self->run.exception_error_code, 0);
+	EXPECT_EQ(self->run.exception_addr, 0);
+
+	munmap(addr, PAGE_SIZE);
+}
+
 TEST_HARNESS_MAIN
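
Usage note, not part of the patch: with the series applied, these tests
are expected to run through the existing SGX selftest binary, along the
lines of

        make -C tools/testing/selftests/sgx
        sudo ./tools/testing/selftests/sgx/test_sgx

where the test_sgx binary name is an assumption based on the current
SGX selftest build. On hardware or kernels without SGX2/EAUG support,
both tests skip via the sgx2_supported() and error-code checks above.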