From patchwork Mon Feb 11 23:27:37 2019
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10806951
From: Igor Stoppa
To: Andy Lutomirski, Matthew Wilcox, Nadav Amit, Peter Zijlstra,
    Dave Hansen, Mimi Zohar, Thiago Jung Bauermann
Cc: Kees Cook, Ahmed Soliman, linux-integrity, Kernel Hardening,
    Linux-MM, Linux Kernel Mailing List
Subject: [RFC PATCH v4 00/12] hardening: statically allocated protected memory
Date: Tue, 12 Feb 2019 01:27:37 +0200
X-Mailer: git-send-email 2.19.1

Hello,

at last I'm able to resume work on the memory protection patchset I
proposed some time ago. This version should address the comments received
so far and introduces support for arm64. Details below.

This patch set implements write-rare memory protection for statically
allocated data.
Its purpose is to keep write-protected the kernel data which is seldom
modified, especially when altering it could be exploited during an attack.
There is no read overhead; writing, however, requires special operations
that are probably unsuitable for often-changing data.

The use is opt-in: apply the __wr_after_init modifier to a variable
declaration. As the name implies, the write protection kicks in only
after init is completed; before that moment, the data is modifiable in
the usual way.

Current limitations:
* supports only data which is allocated statically, at build time
* supports only x86_64 and arm64; other architectures need to provide
  their own backend

Some notes:
- in case an architecture doesn't support write-rare, the behavior is to
  fall back to regular write operations
- before altering any memory, the destination is sanitized
- write-rare data is segregated into its own set of pages
- the memset_user() assembly functions seem to work, but I'm not too
  sure they are really ok
- I've added a simple example: the protection of ima_policy_flags
- the last patch is optional, but the refactoring seemed worth doing
- the x86_64 user-space address range is double the size of the kernel
  address space, so it's possible to randomize the beginning of the
  mapping of the kernel address space, but on arm64 they have the same
  size, so the same cannot be done there
- I'm not sure if it's correct, since it doesn't seem to be that common
  in kernel sources, but instead of using #defines for overriding
  default function calls, I'm using "weak" for the default functions
- unaddressed: Nadav proposed to do:

      #define __wr __attribute__((address_space(5)))

  but I don't know exactly where to use it atm

Changelog:

v3->v4
------
* added a function for setting memory in the user-space mapping for arm64
* refactored the code to work with both supported architectures
* reduced the dependency on x86_64-specific code, to support arm64 by
  default as well
* improved memset_user() for x86_64, but I'm not sure I understood
  correctly what the best way to enhance it was

v2->v3
------
* both wr_memset and wr_memcpy are implemented as generic functions;
  the arch code must provide suitable helpers
* regular initialization for ima_policy_flags: it happens during init
* removed spurious code from the initialization function

v1->v2
------
* introduced a cleaner split between generic and arch code
* added x86_64-specific memset_user()
* replaced kernel-space memset()/memcpy() with their userspace
  counterparts
* randomized the base address for the alternate map across the entire
  available user-space address range (128TB - 64TB)
* converted BUG() to WARN()
* turned verification of the written data into a debugging option
* wr_rcu_assign_pointer() as a special case of wr_assign()
* example with protection of ima_policy_flags
* documentation

Igor Stoppa (12):
  __wr_after_init: Core and default arch
  __wr_after_init: x86_64: memset_user()
  __wr_after_init: x86_64: randomize mapping offset
  __wr_after_init: x86_64: enable
  __wr_after_init: arm64: memset_user()
  __wr_after_init: arm64: enable
  __wr_after_init: Documentation: self-protection
  __wr_after_init: lkdtm test
  __wr_after_init: rodata_test: refactor tests
  __wr_after_init: rodata_test: test __wr_after_init
  __wr_after_init: test write rare functionality
  IMA: turn ima_policy_flags into __wr_after_init

 Documentation/security/self-protection.rst |  14 +-
 arch/Kconfig                               |   7 +
 arch/arm64/Kconfig                         |   1 +
 arch/arm64/include/asm/uaccess.h           |   9 ++
 arch/arm64/lib/Makefile                    |   2 +-
 arch/arm64/lib/memset_user.S (new)         |  63 ++++++++
 arch/x86/Kconfig                           |   1 +
 arch/x86/include/asm/uaccess_64.h          |   6 +
 arch/x86/lib/usercopy_64.c                 |  51 ++++++
 arch/x86/mm/Makefile                       |   2 +
 arch/x86/mm/prmem.c (new)                  |  20 +++
 drivers/misc/lkdtm/core.c                  |   3 +
 drivers/misc/lkdtm/lkdtm.h                 |   3 +
 drivers/misc/lkdtm/perms.c                 |  29 ++++
 include/linux/prmem.h (new)                |  71 ++++++++
 mm/Kconfig.debug                           |   8 +
 mm/Makefile                                |   2 +
 mm/prmem.c (new)                           | 179 +++++++++++++++++++++
 mm/rodata_test.c                           |  69 +++++---
 mm/test_write_rare.c (new)                 | 136 ++++++++++++++++
 security/integrity/ima/ima.h               |   3 +-
 security/integrity/ima/ima_policy.c        |   9 +-
 22 files changed, 656 insertions(+), 32 deletions(-)
 create mode 100644 arch/arm64/lib/memset_user.S
 create mode 100644 arch/x86/mm/prmem.c
 create mode 100644 include/linux/prmem.h
 create mode 100644 mm/prmem.c
 create mode 100644 mm/test_write_rare.c