From patchwork Tue Dec 4 12:18:00 2018
X-Patchwork-Submitter: Igor Stoppa <igor.stoppa@gmail.com>
X-Patchwork-Id: 10711661
From: Igor Stoppa <igor.stoppa@gmail.com>
X-Google-Original-From: Igor Stoppa <igor.stoppa@huawei.com>
To: Andy Lutomirski, Kees Cook, Matthew Wilcox
Cc: igor.stoppa@huawei.com, Nadav Amit, Peter Zijlstra, Dave Hansen,
 linux-integrity@vger.kernel.org, kernel-hardening@lists.openwall.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/6] __wr_after_init: linker section and label
Date: Tue, 4 Dec 2018 14:18:00 +0200
Message-Id: <20181204121805.4621-2-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181204121805.4621-1-igor.stoppa@huawei.com>
References: <20181204121805.4621-1-igor.stoppa@huawei.com>
Reply-To: Igor Stoppa

Introduce a section and a label for statically allocated write rare
data. The label is named "__wr_after_init". As the name implies, after
the init phase is completed, this section will be modifiable only by
invoking write rare functions. The section must take up a set of full
pages.

Signed-off-by: Igor Stoppa <igor.stoppa@huawei.com>
CC: Andy Lutomirski
CC: Nadav Amit
CC: Matthew Wilcox
CC: Peter Zijlstra
CC: Kees Cook
CC: Dave Hansen
CC: linux-integrity@vger.kernel.org
CC: kernel-hardening@lists.openwall.com
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/asm-generic/vmlinux.lds.h | 20 ++++++++++++++++++++
 include/linux/cache.h             | 17 +++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 3d7a6a9c2370..b711dbe6999f 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -311,6 +311,25 @@
 		KEEP(*(__jump_table))					\
 		__stop___jump_table = .;
 
+/*
+ * Allow architectures to handle wr_after_init data on their own by
+ * defining an empty WR_AFTER_INIT_DATA. However, it is important that
+ * pages containing wr_after_init data do not hold anything else: this
+ * avoids both accidentally unprotecting something that is supposed to
+ * stay read-only all the time and accidentally protecting something
+ * else that is supposed to stay writable all the time.
+ */
+#ifndef WR_AFTER_INIT_DATA
+#define WR_AFTER_INIT_DATA(align)					\
+	. = ALIGN(PAGE_SIZE);						\
+	__start_wr_after_init = .;					\
+	. = ALIGN(align);						\
+	*(.data..wr_after_init)						\
+	. = ALIGN(PAGE_SIZE);						\
+	__end_wr_after_init = .;					\
+	. = ALIGN(align);
+#endif
+
 /*
  * Allow architectures to handle ro_after_init data on their
  * own by defining an empty RO_AFTER_INIT_DATA.
@@ -332,6 +351,7 @@
 		__start_rodata = .;					\
 		*(.rodata) *(.rodata.*)					\
 		RO_AFTER_INIT_DATA	/* Read only after init */	\
+		WR_AFTER_INIT_DATA(align)	/* wr after init */	\
 		KEEP(*(__vermagic))	/* Kernel version magic */	\
 		. = ALIGN(8);						\
 		__start___tracepoints_ptrs = .;				\
diff --git a/include/linux/cache.h b/include/linux/cache.h
index 750621e41d1c..9a7e7134b887 100644
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -31,6 +31,23 @@
 #define __ro_after_init __attribute__((__section__(".data..ro_after_init")))
 #endif
 
+/*
+ * __wr_after_init is used to mark objects that cannot be modified
+ * directly after init (i.e. after mark_rodata_ro() has been called).
+ * These objects become effectively read-only, from the perspective of
+ * performing a direct write, like a variable assignment, but they can
+ * still be altered through a dedicated write rare function.
+ * It is intended for objects that are occasionally modified after
+ * init, yet modified so seldom that the extra cost of the indirect
+ * modification is either negligible or worth paying for the sake of
+ * the protection gained.
+ */
+#ifndef __wr_after_init
+#define __wr_after_init \
+	__attribute__((__section__(".data..wr_after_init")))
+#endif
+
+
 #ifndef ____cacheline_aligned
 #define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
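
[Editorial illustration, not part of the patch itself: a minimal sketch
of how the marker and the section labels introduced here could be used,
assuming an architecture that provides set_memory_ro(). The
wr_protect_section() hook is hypothetical, and the wr_assign() accessor
mentioned in the last comment is only a placeholder for the write rare
functions that the rest of this series is meant to provide.]

	#include <linux/cache.h>
	#include <linux/mm.h>		/* PAGE_SHIFT */
	#include <linux/set_memory.h>

	/* Lands in .data..wr_after_init; directly writable only during init. */
	static int widget_limit __wr_after_init = 16;

	/* Section boundaries emitted by WR_AFTER_INIT_DATA(). */
	extern char __start_wr_after_init[], __end_wr_after_init[];

	/*
	 * Both labels are PAGE_SIZE aligned, so the whole section can be
	 * write-protected without touching unrelated data (hypothetical
	 * arch hook, to be called once init is complete).
	 */
	static void wr_protect_section(void)
	{
		unsigned long start = (unsigned long)__start_wr_after_init;
		int pages = (__end_wr_after_init - __start_wr_after_init)
			    >> PAGE_SHIFT;

		set_memory_ro(start, pages);
	}

	/*
	 * Once protected, "widget_limit = n;" would fault. Updates would
	 * instead go through a write rare helper from later in the series,
	 * e.g. a hypothetical wr_assign(widget_limit, n);
	 */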