From patchwork Tue Dec 4 12:18:04 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10711673
From: Igor Stoppa
To: Andy Lutomirski, Kees Cook, Matthew Wilcox
Cc: igor.stoppa@huawei.com, Nadav Amit, Peter Zijlstra, Dave Hansen,
 linux-integrity@vger.kernel.org, kernel-hardening@lists.openwall.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 5/6] __wr_after_init: test write rare functionality
Date: Tue, 4 Dec 2018 14:18:04 +0200
Message-Id: <20181204121805.4621-6-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20181204121805.4621-1-igor.stoppa@huawei.com>
References: <20181204121805.4621-1-igor.stoppa@huawei.com>

Set of test cases meant to confirm that the write rare functionality
works as expected.

Signed-off-by: Igor Stoppa
CC: Andy Lutomirski
CC: Nadav Amit
CC: Matthew Wilcox
CC: Peter Zijlstra
CC: Kees Cook
CC: Dave Hansen
CC: linux-integrity@vger.kernel.org
CC: kernel-hardening@lists.openwall.com
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 include/linux/prmem.h |   7 ++-
 mm/Kconfig.debug      |   9 +++
 mm/Makefile           |   1 +
 mm/test_write_rare.c  | 135 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 149 insertions(+), 3 deletions(-)
 create mode 100644 mm/test_write_rare.c

diff --git a/include/linux/prmem.h b/include/linux/prmem.h
index b0131c1f5dc0..d2492ec24c8c 100644
--- a/include/linux/prmem.h
+++ b/include/linux/prmem.h
@@ -125,9 +125,10 @@ static inline void *wr_memcpy(void *p, const void *q, __kernel_size_t size)
  *
  * It is provided as macro, to match rcu_assign_pointer()
  */
-#define wr_rcu_assign_pointer(p, v) ({ \
-	__wr_op((unsigned long)&p, v, sizeof(p), WR_RCU_ASSIGN_PTR); \
-	p; \
+#define wr_rcu_assign_pointer(p, v) ({ \
+	__wr_op((unsigned long)&p, (unsigned long)v, sizeof(p), \
+		WR_RCU_ASSIGN_PTR); \
+	p; \
 })
 #endif
 #endif
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 9a7b8b049d04..a26ecbd27aea 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -94,3 +94,12 @@ config DEBUG_RODATA_TEST
 	depends on STRICT_KERNEL_RWX
 	---help---
 	  This option enables a testcase for the setting rodata read-only.
+
+config DEBUG_PRMEM_TEST
+	tristate "Run self test for statically allocated protected memory"
+	depends on STRICT_KERNEL_RWX
+	select PRMEM
+	default n
+	help
+	  Tries to verify that the protection for statically allocated memory
+	  works correctly and that the memory is effectively protected.
diff --git a/mm/Makefile b/mm/Makefile
index ef3867c16ce0..8de1d468f4e7 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -59,6 +59,7 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_PRMEM) += prmem.o
+obj-$(CONFIG_DEBUG_PRMEM_TEST) += test_write_rare.o
 obj-$(CONFIG_KSM) += ksm.o
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
diff --git a/mm/test_write_rare.c b/mm/test_write_rare.c
new file mode 100644
index 000000000000..240cc43793d1
--- /dev/null
+++ b/mm/test_write_rare.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * test_write_rare.c
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#ifdef pr_fmt
+#undef pr_fmt
+#endif
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+extern long __start_wr_after_init;
+extern long __end_wr_after_init;
+
+static __wr_after_init int scalar = '0';
+static __wr_after_init u8 array[PAGE_SIZE * 3] __aligned(PAGE_SIZE);
+
+/* The section must occupy a non-zero number of whole pages */
+static bool test_alignment(void)
+{
+	unsigned long pstart = (unsigned long)&__start_wr_after_init;
+	unsigned long pend = (unsigned long)&__end_wr_after_init;
+
+	if (WARN((pstart & ~PAGE_MASK) || (pend & ~PAGE_MASK) ||
+		 (pstart >= pend), "Boundaries test failed."))
+		return false;
+	pr_info("Boundaries test passed.");
+	return true;
+}
+
+static inline bool test_pattern(void)
+{
+	return (memtst(array, '0', PAGE_SIZE / 2) ||
+		memtst(array + PAGE_SIZE / 2, '1', PAGE_SIZE * 3 / 4) ||
+		memtst(array + PAGE_SIZE * 5 / 4, '0', PAGE_SIZE / 2) ||
+		memtst(array + PAGE_SIZE * 7 / 4, '1', PAGE_SIZE * 3 / 4) ||
+		memtst(array + PAGE_SIZE * 5 / 2, '0', PAGE_SIZE / 2));
+}
+
+static bool test_wr_memset(void)
+{
+	int new_val = '1';
+
+	wr_memset(&scalar, new_val, sizeof(scalar));
+	if (WARN(memtst(&scalar, new_val, sizeof(scalar)),
+		 "Scalar write rare memset test failed."))
+		return false;
+
+	pr_info("Scalar write rare memset test passed.");
+
+	wr_memset(array, '0', PAGE_SIZE * 3);
+	if (WARN(memtst(array, '0', PAGE_SIZE * 3),
+		 "Array write rare memset test failed."))
+		return false;
+
+	wr_memset(array + PAGE_SIZE / 2, '1', PAGE_SIZE * 2);
+	if (WARN(memtst(array + PAGE_SIZE / 2, '1', PAGE_SIZE * 2),
+		 "Array write rare memset test failed."))
+		return false;
+
+	wr_memset(array + PAGE_SIZE * 5 / 4, '0', PAGE_SIZE / 2);
+	if (WARN(memtst(array + PAGE_SIZE * 5 / 4, '0', PAGE_SIZE / 2),
+		 "Array write rare memset test failed."))
+		return false;
+
+	if (WARN(test_pattern(), "Array write rare memset test failed."))
+		return false;
+
+	pr_info("Array write rare memset test passed.");
+	return true;
+}
+
+static u8 array_1[PAGE_SIZE * 2];
+static u8 array_2[PAGE_SIZE * 2];
+
+static bool test_wr_memcpy(void)
+{
+	int new_val = 0x12345678;
+
+	wr_assign(scalar, new_val);
+	if (WARN(memcmp(&scalar, &new_val, sizeof(scalar)),
+		 "Scalar write rare memcpy test failed."))
+		return false;
+	pr_info("Scalar write rare memcpy test passed.");
+
+	wr_memset(array, '0', PAGE_SIZE * 3);
+	memset(array_1, '1', PAGE_SIZE * 2);
+	memset(array_2, '0', PAGE_SIZE * 2);
+	wr_memcpy(array + PAGE_SIZE / 2, array_1, PAGE_SIZE * 2);
+	wr_memcpy(array + PAGE_SIZE * 5 / 4, array_2, PAGE_SIZE / 2);
+
+	if (WARN(test_pattern(), "Array write rare memcpy test failed."))
+		return false;
+
+	pr_info("Array write rare memcpy test passed.");
+	return true;
+}
+
+static __wr_after_init int *dst;
+static int reference = 0x54;
+
+static bool test_wr_rcu_assign_pointer(void)
+{
+	wr_rcu_assign_pointer(dst, &reference);
+	return dst == &reference;
+}
+
+static int __init test_static_wr_init_module(void)
+{
+	pr_info("static write_rare test");
+	if (WARN(!(test_alignment() &&
+		   test_wr_memset() &&
+		   test_wr_memcpy() &&
+		   test_wr_rcu_assign_pointer()),
+		 "static rare-write test failed"))
+		return -EFAULT;
+	pr_info("static write_rare test passed");
+	return 0;
+}
+
+module_init(test_static_wr_init_module);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Igor Stoppa ");
+MODULE_DESCRIPTION("Test module for static write rare.");
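
A note for reviewers walking through the checks above: memtst() is not
defined in this patch (presumably it is provided elsewhere in the series).
As used here, it is expected to return 0 when all of the given bytes hold
the expected value and non-zero as soon as one byte differs, i.e. the
verification counterpart of memset(). A hypothetical stand-in with those
semantics, only to illustrate what the WARN() checks assert, could look
like this (not taken from the series):

	/*
	 * Hypothetical helper, for illustration only: returns 0 when all
	 * @size bytes at @p equal @c, non-zero at the first mismatch.
	 */
	static int memtst_sketch(const void *p, int c, __kernel_size_t size)
	{
		const u8 *byte = p;
		__kernel_size_t i;

		for (i = 0; i < size; i++)
			if (byte[i] != (u8)c)
				return 1;
		return 0;
	}

With those semantics, the check after wr_memset(&scalar, new_val,
sizeof(scalar)) passes only if every byte of the protected scalar was
actually rewritten through the write-rare path. Since DEBUG_PRMEM_TEST is
tristate, the tests can also be built as a module and the pass/fail
pr_info() messages inspected in the kernel log after loading it.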