From patchwork Wed Dec 19 21:33:26 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10738165
From: Igor Stoppa
To: Andy Lutomirski, Matthew Wilcox, Peter Zijlstra, Dave Hansen, Mimi Zohar
Cc: igor.stoppa@huawei.com, Nadav Amit, Kees Cook, linux-integrity@vger.kernel.org, kernel-hardening@lists.openwall.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC v2 PATCH 0/12] hardening: statically allocated protected memory
Date: Wed, 19 Dec 2018 23:33:26 +0200
Message-Id: <20181219213338.26619-1-igor.stoppa@huawei.com>

Patch-set implementing write-rare memory protection for statically
allocated data. Its purpose is to keep write-protected the kernel data
that is seldom modified. There is no read overhead; writing, however,
requires special operations that are probably unsuitable for
often-changing data.
The use is opt-in, by applying the modifier __wr_after_init to a
variable declaration. As the name implies, the write protection kicks in
only after init is completed; before that moment, the data is modifiable
in the usual way.

Current limitations:
* supports only data which is allocated statically, at build time
* supports only x86_64; other architectures need to provide their own
  backend

Some notes:
- there is a part of generic code which is basically a NOP, but it
  should allow using the write protection unconditionally. It will
  automatically fall back to the non-protected functionality if the
  specific architecture doesn't support write-rare.
- to avoid the risk of weakening __ro_after_init, __wr_after_init data
  is placed in a separate set of pages, and any invocation will confirm
  that the memory affected falls within this range. rodata_test is
  modified accordingly, to check also this case.
- for now, the patchset addresses only x86_64, as each architecture
  seems to have its own way of dealing with user space. Once a few are
  implemented, it should be more obvious what code can be refactored as
  common.
- the memset_user() assembly function seems to work, but I'm not too
  sure it's really ok
- I've added a simple example: the protection of ima_policy_flags
- the last patch is optional, but it seemed worthwhile to do the
  refactoring

Changelog:

v1->v2
* introduce a cleaner split between generic and arch code
* add x86_64 specific memset_user()
* replace kernel-space memset()/memcpy() with their userspace
  counterparts
* randomize the base address for the alternate map across the entire
  address range available from user space (128TB - 64TB)
* convert BUG() to WARN()
* turn verification of written data into a debugging option
* wr_rcu_assign_pointer() as special case of wr_assign()
* example with protection of ima_policy_flags
* documentation

CC: Andy Lutomirski
CC: Nadav Amit
CC: Matthew Wilcox
CC: Peter Zijlstra
CC: Kees Cook
CC: Dave Hansen
CC: Mimi Zohar
CC: linux-integrity@vger.kernel.org
CC: kernel-hardening@lists.openwall.com
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org

Igor Stoppa (12):
  [PATCH 01/12] x86_64: memset_user()
  [PATCH 02/12] __wr_after_init: linker section and label
  [PATCH 03/12] __wr_after_init: generic header
  [PATCH 04/12] __wr_after_init: x86_64: __wr_op
  [PATCH 05/12] __wr_after_init: x86_64: debug writes
  [PATCH 06/12] __wr_after_init: Documentation: self-protection
  [PATCH 07/12] __wr_after_init: lkdtm test
  [PATCH 08/12] rodata_test: refactor tests
  [PATCH 09/12] rodata_test: add verification for __wr_after_init
  [PATCH 10/12] __wr_after_init: test write rare functionality
  [PATCH 11/12] IMA: turn ima_policy_flags into __wr_after_init
  [PATCH 12/12] x86_64: __clear_user as case of __memset_user

 Documentation/security/self-protection.rst |  14 ++-
 arch/Kconfig                               |  15 +++
 arch/x86/Kconfig                           |   1 +
 arch/x86/include/asm/uaccess_64.h          |   6 +
 arch/x86/lib/usercopy_64.c                 |  41 +++++--
 arch/x86/mm/Makefile                       |   2 +
 arch/x86/mm/prmem.c                        | 127 +++++++++++++++++++++
 drivers/misc/lkdtm/core.c                  |   3 +
 drivers/misc/lkdtm/lkdtm.h                 |   3 +
 drivers/misc/lkdtm/perms.c                 |  29 +++++
 include/asm-generic/vmlinux.lds.h          |  25 +++++
 include/linux/cache.h                      |  21 ++++
 include/linux/prmem.h                      | 142 ++++++++++++++++++++++
 init/main.c                                |   2 +
 mm/Kconfig.debug                           |  16 +++
 mm/Makefile                                |   1 +
 mm/rodata_test.c                           |  69 ++++++----
 mm/test_write_rare.c                       | 135 ++++++++++++++++++++++
 security/integrity/ima/ima.h               |   3 +-
 security/integrity/ima/ima_init.c          |   5 +-
 security/integrity/ima/ima_policy.c        |   9 +-
 21 files changed, 629 insertions(+), 40 deletions(-)