From patchwork Wed Nov 28 00:07:52 2018
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 10701665
From: Rick Edgecombe
To: akpm@linux-foundation.org, luto@kernel.org, will.deacon@arm.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel-hardening@lists.openwall.com, naveen.n.rao@linux.vnet.ibm.com,
 anil.s.keshavamurthy@intel.com, davem@davemloft.net, mhiramat@kernel.org,
 rostedt@goodmis.org, mingo@redhat.com, ast@kernel.org,
 daniel@iogearbox.net, jeyu@kernel.org, netdev@vger.kernel.org,
 ard.biesheuvel@linaro.org, jannh@google.com
Cc: kristen@linux.intel.com, dave.hansen@intel.com, deneen.t.dock@intel.com,
 Rick Edgecombe
Subject: [PATCH 0/2] Don’t leave executable TLB entries to freed pages
Date: Tue, 27 Nov 2018 16:07:52 -0800
Message-Id: <20181128000754.18056-1-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1

Sometimes when memory is freed via the module subsystem, an executable TLB
entry can remain pointing at the freed page. If the page is re-used to back
an address that will receive data from userspace, user data can end up
mapped as executable in the kernel. The root of this behavior is that vfree
lazily flushes the TLB, but does not lazily free the underlying pages.

There are roughly three categories of this, which show up across modules,
BPF, kprobes and ftrace:

1. Executable memory is touched and then immediately freed.

   This shows up in a couple of error conditions in the module loader and
   the BPF JIT compiler.

2. Executable memory is set to RW right before being freed.

   In this case (on x86, and probably other architectures) there is a TLB
   flush when the memory is set to RW, and since the pages are not touched
   between that flush and the free, the mapping should not be in the TLB in
   most cases. So this category is not as big of a concern. Technically,
   however, there is still a race where an attacker could try to keep the
   entry alive for a short window with a well-timed out-of-bounds read or
   speculative read, so ideally this could be blocked as well.

3. Executable memory is freed in an interrupt.

   At least one example of this is the freeing of init sections in the
   module loader.
   Since vmalloc reuses the allocation itself for the work-queue
   linked-list node used for the deferred frees, the memory actually gets
   touched as part of the vfree operation, and so re-enters the TLB even
   after the flush from resetting the permissions.

I have only actually tested category 1, and identified 2 and 3 just from
reading the code. To catch all of these, module_alloc for x86 is changed to
use a new flag that instructs the unmap operation to flush the TLB before
freeing the pages. If this solution seems good, I can plug the flag in for
the other architectures that define PAGE_KERNEL_EXEC.

Rick Edgecombe (2):
  vmalloc: New flag for flush before releasing pages
  x86/modules: Make x86 allocs to flush when free

 arch/x86/kernel/module.c |  4 ++--
 include/linux/vmalloc.h  |  1 +
 mm/vmalloc.c             | 13 +++++++++++--
 3 files changed, 14 insertions(+), 4 deletions(-)
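For reviewers skimming the cover letter, the category-1 pattern and the shape of the proposed fix can be sketched as below. This is a kernel-style sketch, not buildable outside a kernel tree; verify_image() is a hypothetical stand-in for the failing check, and the vmalloc flag name is illustrative since the cover letter does not name the new flag:

```c
/*
 * Category 1 sketch: executable memory is written and then immediately
 * freed on an error path, as in the module loader / BPF JIT.
 */
static int load_image(const void *image, size_t size)
{
	void *p = module_alloc(size);	/* mapped PAGE_KERNEL_EXEC */

	if (!p)
		return -ENOMEM;

	memcpy(p, image, size);		/* populates an executable TLB entry */

	if (!verify_image(p, size)) {	/* hypothetical failing check */
		vfree(p);		/* TLB flush is lazy, so the stale
					 * exec entry can outlive the page
					 * and cover whatever reuses it */
		return -EINVAL;
	}
	return 0;
}

/*
 * The proposed fix has module_alloc pass a new vmalloc flag (name
 * illustrative) so the unmap path flushes the TLB before the pages
 * are released, roughly:
 *
 *	p = __vmalloc_node_range(size, MODULE_ALIGN, MODULES_VADDR,
 *				 MODULES_END, GFP_KERNEL, PAGE_KERNEL_EXEC,
 *				 VM_FLUSH_BEFORE_FREE, NUMA_NO_NODE,
 *				 __builtin_return_address(0));
 */
```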