From patchwork Sat Dec 16 16:19:52 2017
X-Patchwork-Submitter: Jarkko Sakkinen
X-Patchwork-Id: 10117233
From: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
To: intel-sgx-kernel-dev@lists.01.org, platform-driver-x86@vger.kernel.org,
	x86@kernel.org
Cc: Darren Hart, linux-kernel@vger.kernel.org, Andy Shevchenko
Date: Sat, 16 Dec 2017 18:19:52 +0200
Message-Id: <20171216162200.20243-6-jarkko.sakkinen@linux.intel.com>
In-Reply-To: <20171216162200.20243-1-jarkko.sakkinen@linux.intel.com>
References: <20171216162200.20243-1-jarkko.sakkinen@linux.intel.com>
Subject: [intel-sgx-kernel-dev] [PATCH v9 5/7] intel_sgx: ptrace() support

Implement a VMA access() callback in order to support ptrace() debugging
of enclaves. With debug enclaves, enclave memory can be read and written
one word at a time by using the ENCLS(EDBGRD) and ENCLS(EDBGWR) leaf
instructions.
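
For reference, the callback is reached through the normal ptrace() word
access path. Below is a minimal user space sketch (not part of this
patch; peek_enclave_word(), the PID and the enclave address are
hypothetical) that reads one word from a debug enclave with
PTRACE_PEEKDATA, which the kernel services via sgx_vma_access() and
ENCLS(EDBGRD):

#include <errno.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Read one word from an enclave mapping of the given tracee. */
static long peek_enclave_word(pid_t pid, void *enclave_addr)
{
	long word;

	if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) < 0)
		return -1;
	waitpid(pid, NULL, 0);

	errno = 0;
	word = ptrace(PTRACE_PEEKDATA, pid, enclave_addr, NULL);
	if (errno)
		perror("PTRACE_PEEKDATA"); /* fails for non-debug enclaves */

	ptrace(PTRACE_DETACH, pid, NULL, NULL);
	return word;
}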
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Serge Ayoun
---
 drivers/platform/x86/intel_sgx/sgx_vma.c | 123 ++++++++++++++++++++++++++++++
 1 file changed, 123 insertions(+)

diff --git a/drivers/platform/x86/intel_sgx/sgx_vma.c b/drivers/platform/x86/intel_sgx/sgx_vma.c
index 481f671f10ca..2bce40ef6823 100644
--- a/drivers/platform/x86/intel_sgx/sgx_vma.c
+++ b/drivers/platform/x86/intel_sgx/sgx_vma.c
@@ -110,8 +110,131 @@ static int sgx_vma_fault(struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
+static int sgx_edbgrd(struct sgx_encl *encl, struct sgx_encl_page *page,
+		      unsigned long addr, void *data)
+{
+	unsigned long offset;
+	void *ptr;
+	int ret;
+
+	offset = addr & ~PAGE_MASK;
+
+	if ((page->desc & SGX_ENCL_PAGE_TCS) &&
+	    (offset + sizeof(unsigned long)) >
+	    offsetof(struct sgx_tcs, reserved))
+		return -ECANCELED;
+
+	ptr = sgx_get_page(page->epc_page);
+	ret = __edbgrd((unsigned long)ptr + offset, data);
+	sgx_put_page(ptr);
+	if (ret) {
+		sgx_dbg(encl, "EDBGRD returned %d\n", ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int sgx_edbgwr(struct sgx_encl *encl, struct sgx_encl_page *page,
+		      unsigned long addr, void *data)
+{
+	unsigned long offset;
+	void *ptr;
+	int ret;
+
+	offset = addr & ~PAGE_MASK;
+
+	/* Writing anything other than the flags field will cause a #GP */
+	if ((page->desc & SGX_ENCL_PAGE_TCS) &&
+	    offset < offsetof(struct sgx_tcs, flags) &&
+	    (offset + sizeof(unsigned long)) >
+	    offsetof(struct sgx_tcs, flags))
+		return -ECANCELED;
+
+	ptr = sgx_get_page(page->epc_page);
+	ret = __edbgwr((unsigned long)ptr + offset, data);
+	sgx_put_page(ptr);
+	if (ret) {
+		sgx_dbg(encl, "EDBGWR returned %d\n", ret);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
+			  void *buf, int len, int write)
+{
+	struct sgx_encl *encl = vma->vm_private_data;
+	struct sgx_encl_page *entry = NULL;
+	unsigned long align;
+	char data[sizeof(unsigned long)];
+	int offset;
+	int cnt;
+	int ret = 0;
+	int i;
+
+	/* If the process was forked, the VMA is still there but
+	 * vm_private_data is set to NULL.
+	 */
+	if (!encl)
+		return -EFAULT;
+
+	if (!(encl->flags & SGX_ENCL_DEBUG) ||
+	    !(encl->flags & SGX_ENCL_INITIALIZED) ||
+	    (encl->flags & SGX_ENCL_DEAD))
+		return -EFAULT;
+
+	for (i = 0; i < len; i += cnt) {
+		if (!entry || !((addr + i) & (PAGE_SIZE - 1))) {
+			if (entry)
+				entry->desc &= ~SGX_ENCL_PAGE_RESERVED;
+
+			entry = sgx_fault_page(vma, (addr + i) & PAGE_MASK,
+					       SGX_FAULT_RESERVE);
+			if (IS_ERR(entry)) {
+				ret = PTR_ERR(entry);
+				entry = NULL;
+				break;
+			}
+		}
+
+		/* Locking is not needed because only immutable fields of the
+		 * page are accessed and the page itself is reserved so that
+		 * it cannot be swapped out in the middle.
+		 */
+
+		align = ALIGN_DOWN(addr + i, sizeof(unsigned long));
+		offset = (addr + i) & (sizeof(unsigned long) - 1);
+		cnt = sizeof(unsigned long) - offset;
+		cnt = min(cnt, len - i);
+
+		/* Read the full word first so that a partial write
+		 * preserves the untouched bytes (read-modify-write).
+		 */
+		ret = sgx_edbgrd(encl, entry, align, data);
+		if (ret)
+			break;
+
+		if (write) {
+			memcpy(data + offset, buf + i, cnt);
+			ret = sgx_edbgwr(encl, entry, align, data);
+			if (ret)
+				break;
+		} else {
+			memcpy(buf + i, data + offset, cnt);
+		}
+	}
+
+	if (entry)
+		entry->desc &= ~SGX_ENCL_PAGE_RESERVED;
+
+	return (ret < 0 && ret != -ECANCELED) ? ret : i;
+}
+
 const struct vm_operations_struct sgx_vm_ops = {
 	.close = sgx_vma_close,
 	.open = sgx_vma_open,
 	.fault = sgx_vma_fault,
+	.access = sgx_vma_access,
 };
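
Note: sgx_vma_access() is not invoked by ptrace() directly. Because the
enclave's pages cannot be pinned with get_user_pages(), the mm core falls
back to the VMA's access() callback, roughly like this (a simplified
paraphrase of __access_remote_vm() in mm/memory.c, not part of this
patch):

	vma = find_vma(mm, addr);
	if (vma && vma->vm_ops && vma->vm_ops->access)
		bytes = vma->vm_ops->access(vma, addr, buf, len, write);

The same fallback is used by /proc/<pid>/mem, so the EDBGRD/EDBGWR path
above also covers debugger reads and writes through that interface.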