From patchwork Tue Oct 23 21:34:57 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653795
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Randy Dunlap, Mike Rapoport, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 10/17] prmem: documentation
Date: Wed, 24 Oct 2018 00:34:57 +0300
Message-Id: <20181023213504.28905-11-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

Documentation for protected memory.

Topics covered:
* static memory allocation
* dynamic memory allocation
* write-rare

Signed-off-by: Igor Stoppa
CC: Jonathan Corbet
CC: Randy Dunlap
CC: Mike Rapoport
CC: linux-doc@vger.kernel.org
CC: linux-kernel@vger.kernel.org
---
 Documentation/core-api/index.rst |   1 +
 Documentation/core-api/prmem.rst | 172 +++++++++++++++++++++++++++++++
 MAINTAINERS                      |   1 +
 3 files changed, 174 insertions(+)
 create mode 100644 Documentation/core-api/prmem.rst

diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 26b735cefb93..1a90fa878d8d 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -31,6 +31,7 @@ Core utilities
    gfp_mask-from-fs-io
    timekeeping
    boot-time-mm
+   prmem
 
 Interfaces for kernel debugging
 ===============================
diff --git a/Documentation/core-api/prmem.rst b/Documentation/core-api/prmem.rst
new file mode 100644
index 000000000000..16d7edfe327a
--- /dev/null
+++ b/Documentation/core-api/prmem.rst
@@ -0,0 +1,172 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+.. _prmem:
+
+Memory Protection
+=================
+
+:Date: October 2018
+:Author: Igor Stoppa
+
+Foreword
+--------
+- In a typical system using some sort of RAM as execution environment,
+  **all** memory is initially writable.
+
+- It must be initialized with the appropriate content, be it code or data.
+
+- Said content typically undergoes modifications, i.e. relocations or
+  relocation-induced changes.
+
+- The present document doesn't address this transient phase.
+
+- Kernel code is protected at system level and, unlike data, it doesn't
+  require special attention.
+
+Protection mechanism
+--------------------
+
+- When available, the MMU can write protect memory pages that would
+  otherwise be writable.
+
+- The protection has page-level granularity.
+
+- An attempt to overwrite a protected page will trigger an exception.
+
+- **Write protected data must go exclusively to write protected pages.**
+
+- **Writable data must go exclusively to writable pages.**
+
+Available protections for kernel data
+-------------------------------------
+
+- **constant**
+   Labelled as **const**, the data is never supposed to be altered.
+   It is statically allocated - if it has any memory footprint at all.
+   The compiler can even optimize it away, where possible, by replacing
+   references to a **const** with its actual value.
+
+- **read only after init**
+   By tagging an otherwise ordinary statically allocated variable with
+   **__ro_after_init**, it is placed in a special segment that becomes
+   write protected at the end of the kernel init phase.
+   The compiler has no notion of this restriction, so it will treat any
+   write operation on such a variable as legal. However, assignments
+   attempted after the write protection is in place will cause
+   exceptions.
+
+- **write rare after init**
+   This can be seen as a variant of read only after init, which uses
+   the tag **__wr_after_init**. It is also limited to statically
+   allocated memory. It is still possible to alter this type of
+   variable after the kernel init phase is complete, but it can be done
+   exclusively with special functions, instead of the assignment
+   operator. Using the assignment operator after conclusion of the init
+   phase will still trigger an exception. It is not possible to
+   transition a variable from __wr_after_init to a permanent read-only
+   status at runtime. (A sketch of this pattern follows the list.)
+
+- **dynamically allocated write-rare / read-only**
+   After defining a pool, memory can be obtained through it, primarily
+   through the **pmalloc()** allocator. The exact writability state of
+   the memory obtained from **pmalloc()** and friends can be configured
+   when creating the pool. At any point it is possible to transition
+   the memory currently associated with the pool to a less permissive
+   writability state. Once memory has become read-only, the only valid
+   operation, besides reading, is to release it, by destroying the pool
+   it belongs to. (A pool lifecycle sketch follows the next section.)
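+
+The following is a minimal, illustrative sketch of the write-rare
+pattern for statically allocated data. The variable and the helper
+**wr_assign()** are assumptions made for the sake of the example; the
+authoritative write-rare API is the one in include/linux/prmem.h,
+documented below::
+
+    #include <linux/prmem.h>
+
+    /* Writable during init, write-rare once init completes. */
+    static int max_sessions __wr_after_init = 64;
+
+    static void set_max_sessions(int new_max)
+    {
+            /*
+             * After init, "max_sessions = new_max;" would trigger an
+             * exception; the update must go through a write-rare
+             * helper, which writes via an alternate mapping.
+             */
+            wr_assign(max_sessions, new_max);
+    }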
+
+Protecting dynamically allocated memory
+---------------------------------------
+
+When dealing with dynamically allocated memory, three options are
+available for configuring its writability state:
+
+- **Options selected when creating a pool**
+   When creating the pool, it is possible to choose one of the following:
+
+   - **PMALLOC_MODE_RO**
+      - Writability at allocation time: *WRITABLE*
+      - Writability at protection time: *NONE*
+   - **PMALLOC_MODE_WR**
+      - Writability at allocation time: *WRITABLE*
+      - Writability at protection time: *WRITE-RARE*
+   - **PMALLOC_MODE_AUTO_RO**
+      - Writability at allocation time:
+         - the latest allocation: *WRITABLE*
+         - every other allocation: *NONE*
+      - Writability at protection time: *NONE*
+   - **PMALLOC_MODE_AUTO_WR**
+      - Writability at allocation time:
+         - the latest allocation: *WRITABLE*
+         - every other allocation: *WRITE-RARE*
+      - Writability at protection time: *WRITE-RARE*
+   - **PMALLOC_MODE_START_WR**
+      - Writability at allocation time: *WRITE-RARE*
+      - Writability at protection time: *WRITE-RARE*
+
+   **Remarks:**
+
+   - The "AUTO" modes perform automatic protection of the content
+     whenever the current vmap_area is used up and a new one is
+     allocated.
+
+     - At that point, the vmap_area being phased out is protected.
+     - The size of the vmap_area depends on various parameters, so it
+       might not be possible to know for sure *when* certain data will
+       be protected.
+     - The functionality is provided as a tradeoff between hardening
+       and speed; its usefulness depends on the specific use case at
+       hand.
+
+   - The "START_WR" mode is the only one which provides immediate
+     protection, at the cost of speed.
+
+- **Protecting the pool**
+   This is achieved with **pmalloc_protect_pool()**.
+
+   - Any vmap_area currently in the pool is write-protected according
+     to its initial configuration.
+   - Any residual space still available in the current vmap_area is
+     lost, as the area is protected.
+   - **Protecting a pool after every allocation will likely be very
+     wasteful**; using PMALLOC_MODE_START_WR is likely a better choice.
+
+- **Upgrading the protection level**
+   This is achieved with **pmalloc_make_pool_ro()**.
+
+   - It turns the present content of a write-rare pool into read-only
+     memory.
+   - It can be useful when the content of the memory has settled.
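+
+Below is an illustrative sketch of a typical pool lifecycle. Only
+**pmalloc()**, **pmalloc_protect_pool()** and **pmalloc_make_pool_ro()**
+are named in this document; **pmalloc_create_pool()** and
+**pmalloc_destroy_pool()** are assumed from the references to defining
+and destroying a pool, so treat the exact signatures as illustrative::
+
+    #include <linux/prmem.h>
+
+    struct my_config {
+            unsigned long value;
+    };
+
+    static struct pmalloc_pool *pool;
+    static struct my_config *cfg;
+
+    static int __init my_init(void)
+    {
+            /* Content becomes write-rare once the pool is protected. */
+            pool = pmalloc_create_pool(PMALLOC_MODE_WR);
+            if (!pool)
+                    return -ENOMEM;
+
+            /* In this mode, memory is writable right after allocation. */
+            cfg = pmalloc(pool, sizeof(*cfg));
+            if (!cfg) {
+                    pmalloc_destroy_pool(pool);
+                    return -ENOMEM;
+            }
+            cfg->value = 42;        /* ordinary write, still legal */
+
+            pmalloc_protect_pool(pool);     /* cfg is now write-rare */
+            return 0;
+    }
+
+    static void my_seal(void)
+    {
+            /* Once the content has settled, make it read-only. */
+            pmalloc_make_pool_ro(pool);
+    }
+
+    static void my_teardown(void)
+    {
+            /* Individual frees are unsupported; destroy the whole pool. */
+            pmalloc_destroy_pool(pool);
+    }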
+
+Caveats
+-------
+
+- Freeing of memory is not supported. Pages will be returned to the
+  system upon destruction of their memory pool.
+
+- The address range available for vmalloc (and thus for pmalloc too) is
+  limited on 32-bit systems. However, this shouldn't be an issue, since
+  not much data is expected to be dynamically allocated and then
+  write-protected.
+
+- On SMP systems, changing the state of pages and altering mappings
+  requires cross-processor synchronization of page tables. This is an
+  additional reason for limiting the use of write rare.
+
+- Not only must the pmalloc memory be protected, but so must any
+  reference to it that might become the target of an attack. The attack
+  would replace a reference to the protected memory with a reference to
+  some other, unprotected, memory.
+
+- The users of rare write must take care of ensuring the atomicity of
+  the action, with respect to the way they use the data being altered;
+  for example, take a lock before making a copy of the value to modify
+  (if it's relevant), then alter it, issue the call to rare write and
+  finally release the lock. Some special scenarios might be exempt from
+  the need for locking, but in general rare write must be treated as an
+  operation that can incur races. (A minimal locked-update sketch is
+  appended at the end of this message.)
+
+- pmalloc relies on virtual memory areas and will therefore use more
+  TLB entries. It still does a better job of it, compared to invoking
+  vmalloc for each allocation, but it is undeniably less optimized with
+  respect to TLB use than using the physmap directly, through kmalloc
+  or similar.
+
+Utilization
+-----------
+
+**add examples here**
+
+API
+---
+
+.. kernel-doc:: include/linux/prmem.h
+.. kernel-doc:: mm/prmem.c
+.. kernel-doc:: include/linux/prmemextra.h
diff --git a/MAINTAINERS b/MAINTAINERS
index ea979a5a9ec9..246b1a1cc8bb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9463,6 +9463,7 @@ F:	include/linux/prmemextra.h
 F:	mm/prmem.c
 F:	mm/test_write_rare.c
 F:	mm/test_pmalloc.c
+F:	Documentation/core-api/prmem.rst
 
 MEMORY MANAGEMENT
 L:	linux-mm@kvack.org
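
As a companion to the atomicity caveat above, here is a minimal
locked-update sketch. It assumes a write-rare helper along the lines of
wr_assign() (name taken as an assumption, not a guaranteed interface)
and an ordinary spinlock; the locking scheme shown is only one possible
choice and depends on how the data is actually used:

	#include <linux/spinlock.h>
	#include <linux/prmem.h>

	static DEFINE_SPINLOCK(limit_lock);
	static unsigned long limit __wr_after_init = 100;

	/* Raise the limit atomically with respect to other updaters. */
	static void raise_limit(unsigned long delta)
	{
		unsigned long new_limit;

		spin_lock(&limit_lock);
		new_limit = limit + delta;     /* copy and modify under the lock */
		wr_assign(limit, new_limit);   /* publish via the rare-write helper */
		spin_unlock(&limit_lock);
	}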