From patchwork Tue Oct 23 21:34:52 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653755
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
    Michal Hocko, kernel-hardening@lists.openwall.com,
    linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
    Vlastimil Babka, "Kirill A. Shutemov", Andrew Morton, Pavel Tatashin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/17] prmem: shorthands for write rare on common types
Date: Wed, 24 Oct 2018 00:34:52 +0300
Message-Id: <20181023213504.28905-6-igor.stoppa@huawei.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

Wrappers around the basic write rare functionality, addressing several
common data types found in the kernel, allowing the new values to be
specified as immediates, such as constants and defines.

Note: The list is not complete and could be expanded.

Signed-off-by: Igor Stoppa
CC: Michal Hocko
CC: Vlastimil Babka
CC: "Kirill A. Shutemov"
CC: Andrew Morton
CC: Pavel Tatashin
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
---
 MAINTAINERS                |   1 +
 include/linux/prmemextra.h | 133 +++++++++++++++++++++++++++++++++++++
 2 files changed, 134 insertions(+)
 create mode 100644 include/linux/prmemextra.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e566c5d09faf..df7221eca160 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9459,6 +9459,7 @@ M:	Igor Stoppa
 L:	kernel-hardening@lists.openwall.com
 S:	Maintained
 F:	include/linux/prmem.h
+F:	include/linux/prmemextra.h
 F:	mm/prmem.c

 MEMORY MANAGEMENT
diff --git a/include/linux/prmemextra.h b/include/linux/prmemextra.h
new file mode 100644
index 000000000000..36995717720e
--- /dev/null
+++ b/include/linux/prmemextra.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * prmemextra.h: Shorthands for write rare of basic data types
+ *
+ * (C) Copyright 2018 Huawei Technologies Co. Ltd.
+ * Author: Igor Stoppa
+ *
+ */
+
+#ifndef _LINUX_PRMEMEXTRA_H
+#define _LINUX_PRMEMEXTRA_H
+
+#include <linux/prmem.h>
+
+/**
+ * wr_char - alters a char in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_char(const char *dst, const char val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_short - alters a short in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_short(const short *dst, const short val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ushort - alters an unsigned short in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ushort(const unsigned short *dst, const unsigned short val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_int - alters an int in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_int(const int *dst, const int val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_uint - alters an unsigned int in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_uint(const unsigned int *dst, const unsigned int val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_long - alters a long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_long(const long *dst, const long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ulong - alters an unsigned long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ulong(const unsigned long *dst, const unsigned long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_longlong - alters a long long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_longlong(const long long *dst, const long long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+/**
+ * wr_ulonglong - alters an unsigned long long in write rare memory
+ * @dst: target for write
+ * @val: new value
+ *
+ * Returns true on success, false otherwise.
+ */
+static __always_inline
+bool wr_ulonglong(const unsigned long long *dst,
+		  const unsigned long long val)
+{
+	return wr_memcpy(dst, &val, sizeof(val));
+}
+
+#endif
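
A minimal usage sketch of these shorthands, for readers following the
series: it assumes the __wr_after_init marker and the wr_memcpy()
primitive introduced by the earlier prmem patches, and the variable and
function names are made up purely for illustration, to show how an
immediate value replaces the usual temporary-plus-wr_memcpy() pattern.

/*
 * Illustration only -- not part of this patch. Assumes __wr_after_init
 * and wr_memcpy() from earlier patches in this series; "max_retries"
 * and "bump_max_retries" are hypothetical names.
 */
#include <linux/printk.h>
#include <linux/prmem.h>
#include <linux/prmemextra.h>

/* Statically allocated data, placed in the write-rare section. */
static int max_retries __wr_after_init = 3;

static void bump_max_retries(void)
{
	/*
	 * The new value is passed as an immediate, so no temporary
	 * variable is needed; this is the shorthand for:
	 *
	 *	int tmp = 5;
	 *	wr_memcpy(&max_retries, &tmp, sizeof(tmp));
	 */
	if (!wr_int(&max_retries, 5))
		pr_warn("write-rare update of max_retries failed\n");
}

Taking @val by value is what lets callers pass literals and defines
directly, at the cost of one extra copy of the value on the stack for
each update.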