From patchwork Wed Feb 27 20:01:30 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10832285
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Solar Designer, Greg KH, Jann Horn, Sean Christopherson, Dominik Brodowski, linux-kernel@vger.kernel.org, Kernel Hardening
Subject: [PATCH v2 1/3] x86/asm: Pin sensitive CR0 bits
Date: Wed, 27 Feb 2019 12:01:30 -0800
Message-Id: <20190227200132.24707-2-keescook@chromium.org>
In-Reply-To: <20190227200132.24707-1-keescook@chromium.org>
References: <20190227200132.24707-1-keescook@chromium.org>

With sensitive CR4 bits now pinned, it's possible that the WP bit for
CR0 might become a target as well. Following the same reasoning as the
CR4 pinning, this pins CR0's WP bit (though this can be done with a
static value). As before, to convince the compiler not to optimize away
the check for the WP bit after the set, this marks "val" as an output
from the asm() block. This protects against just jumping into the
function past where the masking happens; we must check that the mask was
applied after we do the set.
Due to how this function can be built by the compiler (especially due to
the removal of frame pointers), jumping into the middle of the function
frequently doesn't require stack manipulation to construct a stack frame
(there may be only a retq without pops, which is sufficient for use with
exploits like timer overwrites). Additionally, this avoids WARN()ing
before resetting the bit, to minimize any race conditions with leaving
the bit unset.

Suggested-by: Peter Zijlstra
Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..1f01dc3f6c64 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -25,7 +25,28 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * after the cr0 write, mark "val" as being also an output ("+r")
+	 * by this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0" : "+r" (val) : "m" (__force_order) : );
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)

From patchwork Wed Feb 27 20:01:31 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10832281
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Solar Designer, Greg KH, Jann Horn, Sean Christopherson, Dominik Brodowski, linux-kernel@vger.kernel.org, Kernel Hardening
Subject: [PATCH v2 2/3] x86/asm: Avoid taking an exception before cr4 restore
Date: Wed, 27 Feb 2019 12:01:31 -0800
Message-Id: <20190227200132.24707-3-keescook@chromium.org>
In-Reply-To: <20190227200132.24707-1-keescook@chromium.org>
References: <20190227200132.24707-1-keescook@chromium.org>

Instead of taking a full WARN() exception before restoring a potentially
missed CR4 bit, this retains the missing bit for later reporting. This
matches the logic used for the CR0 pinning. Additionally, this updates
the comments to note the required use of "volatile".
Suggested-by: Solar Designer
Signed-off-by: Kees Cook
---
 arch/x86/include/asm/special_insns.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 1f01dc3f6c64..6020cb1de66e 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -97,18 +97,24 @@ extern volatile unsigned long cr4_pin;
 
 static inline void native_write_cr4(unsigned long val)
 {
+	unsigned long warn = 0;
+
 again:
 	val |= cr4_pin;
 	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
 	/*
 	 * If the MOV above was used directly as a ROP gadget we can
 	 * notice the lack of pinned bits in "val" and start the function
-	 * from the beginning to gain the cr4_pin bits for sure.
+	 * from the beginning to gain the cr4_pin bits for sure. Note
+	 * that "val" must be volatile to keep the compiler from
+	 * optimizing away this check.
 	 */
-	if (WARN_ONCE((val & cr4_pin) != cr4_pin,
-		      "Attempt to unpin cr4 bits: %lx, cr4 bypass attack?!",
-		      ~val & cr4_pin))
+	if ((val & cr4_pin) != cr4_pin) {
+		warn = ~val & cr4_pin;
 		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
+		  warn);
 }
 
 #ifdef CONFIG_X86_64

From patchwork Wed Feb 27 20:01:32 2019
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 10832279
From: Kees Cook
To: Thomas Gleixner
Cc: Kees Cook, Peter Zijlstra, Solar Designer, Greg KH, Jann Horn, Sean Christopherson, Dominik Brodowski, linux-kernel@vger.kernel.org, Kernel Hardening
Subject: [PATCH v2 3/3] lkdtm: Check for SMEP clearing protections
Date: Wed, 27 Feb 2019 12:01:32 -0800
Message-Id: <20190227200132.24707-4-keescook@chromium.org>
In-Reply-To: <20190227200132.24707-1-keescook@chromium.org>
References: <20190227200132.24707-1-keescook@chromium.org>

This adds an x86-specific test for pinned cr4 bits. A successful test
will validate the pinning and, if needed, check the ROP-style
call-into-the-middle-of-the-function defense. For example, with
native_write_cr4() looking like this:

ffffffff8171bce0 <native_write_cr4>:
ffffffff8171bce0:	48 8b 35 79 46 f2 00	mov 0xf24679(%rip),%rsi
ffffffff8171bce7:	48 09 f7		or %rsi,%rdi
ffffffff8171bcea:	0f 22 e7		mov %rdi,%cr4
...
ffffffff8171bd5a:	c3			retq

the UNSET_SMEP test will jump to ffffffff8171bcea (the mov to cr4)
instead of ffffffff8171bce0 (the native_write_cr4() entry) to simulate a
direct-call bypass attempt.

Expected successful results:

# echo UNSET_SMEP > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg
[   79.594433] lkdtm: Performing direct entry UNSET_SMEP
[   79.596459] lkdtm: trying to clear SMEP normally
[   79.598406] lkdtm: ok: SMEP did not get cleared
[   79.599981] lkdtm: trying to clear SMEP with call gadget
[   79.601810] ------------[ cut here ]------------
[   79.603421] Attempt to unpin cr4 bits: 100000; bypass attack?!
...
[   79.650170] ---[ end trace 2452ca0f6126242e ]---
[   79.650937] lkdtm: ok: SMEP removal was reverted

Signed-off-by: Kees Cook
---
 drivers/misc/lkdtm/bugs.c  | 61 ++++++++++++++++++++++++++++++++++++++
 drivers/misc/lkdtm/core.c  |  1 +
 drivers/misc/lkdtm/lkdtm.h |  1 +
 3 files changed, 63 insertions(+)

diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
index 7eebbdfbcacd..6176384b4f85 100644
--- a/drivers/misc/lkdtm/bugs.c
+++ b/drivers/misc/lkdtm/bugs.c
@@ -255,3 +255,64 @@ void lkdtm_STACK_GUARD_PAGE_TRAILING(void)
 
 	pr_err("FAIL: accessed page after stack!\n");
 }
+
+void lkdtm_UNSET_SMEP(void)
+{
+#ifdef CONFIG_X86_64
+#define MOV_CR4_DEPTH	64
+	void (*direct_write_cr4)(unsigned long val);
+	unsigned char *insn;
+	unsigned long cr4;
+	int i;
+
+	cr4 = native_read_cr4();
+
+	if ((cr4 & X86_CR4_SMEP) != X86_CR4_SMEP) {
+		pr_err("FAIL: SMEP not in use\n");
+		return;
+	}
+	cr4 &= ~(X86_CR4_SMEP);
+
+	pr_info("trying to clear SMEP normally\n");
+	native_write_cr4(cr4);
+	if (cr4 == native_read_cr4()) {
+		pr_err("FAIL: pinning SMEP failed!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+		return;
+	}
+	pr_info("ok: SMEP did not get cleared\n");
+
+	/*
+	 * To test the post-write pinning verification we need to call
+	 * directly into the middle of native_write_cr4() where the
+	 * cr4 write happens, skipping the pinning. This searches for
+	 * the cr4 writing instruction.
+	 */
+	insn = (unsigned char *)native_write_cr4;
+	for (i = 0; i < MOV_CR4_DEPTH; i++) {
+		/* mov %rdi, %cr4 */
+		if (insn[i] == 0x0f && insn[i+1] == 0x22 && insn[i+2] == 0xe7)
+			break;
+	}
+	if (i >= MOV_CR4_DEPTH) {
+		pr_info("ok: cannot locate cr4 writing call gadget\n");
+		return;
+	}
+	direct_write_cr4 = (void *)(insn + i);
+
+	pr_info("trying to clear SMEP with call gadget\n");
+	direct_write_cr4(cr4);
+	if (native_read_cr4() & X86_CR4_SMEP) {
+		pr_info("ok: SMEP removal was reverted\n");
+	} else {
+		pr_err("FAIL: cleared SMEP not detected!\n");
+		cr4 |= X86_CR4_SMEP;
+		pr_info("restoring SMEP\n");
+		native_write_cr4(cr4);
+	}
+#else
+	pr_err("FAIL: this test is x86_64-only\n");
+#endif
+}

diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index 2837dc77478e..fd668776414b 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -132,6 +132,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(CORRUPT_LIST_ADD),
 	CRASHTYPE(CORRUPT_LIST_DEL),
 	CRASHTYPE(CORRUPT_USER_DS),
+	CRASHTYPE(UNSET_SMEP),
 	CRASHTYPE(CORRUPT_STACK),
 	CRASHTYPE(CORRUPT_STACK_STRONG),
 	CRASHTYPE(STACK_GUARD_PAGE_LEADING),

diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
index 3c6fd327e166..9c78d7e21c13 100644
--- a/drivers/misc/lkdtm/lkdtm.h
+++ b/drivers/misc/lkdtm/lkdtm.h
@@ -26,6 +26,7 @@ void lkdtm_CORRUPT_LIST_DEL(void);
 void lkdtm_CORRUPT_USER_DS(void);
 void lkdtm_STACK_GUARD_PAGE_LEADING(void);
 void lkdtm_STACK_GUARD_PAGE_TRAILING(void);
+void lkdtm_UNSET_SMEP(void);
 
 /* lkdtm_heap.c */
 void lkdtm_OVERWRITE_ALLOCATION(void);