From patchwork Tue Oct 23 21:34:59 2018
X-Patchwork-Submitter: Igor Stoppa
X-Patchwork-Id: 10653799
From: Igor Stoppa
To: Mimi Zohar, Kees Cook, Matthew Wilcox, Dave Chinner, James Morris,
 Michal Hocko, kernel-hardening@lists.openwall.com,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org
Cc: igor.stoppa@huawei.com, Dave Hansen, Jonathan Corbet, Laura Abbott,
 Greg Kroah-Hartman, Andrew Morton, Masahiro Yamada, Alexey Dobriyan,
 Pekka Enberg, "Paul E. McKenney", Lihao Liang,
 linux-kernel@vger.kernel.org
Subject: [PATCH 12/17] prmem: linked list: set alignment
Date: Wed, 24 Oct 2018 00:34:59 +0300
Message-Id: <20181023213504.28905-13-igor.stoppa@huawei.com>
In-Reply-To: <20181023213504.28905-1-igor.stoppa@huawei.com>
References: <20181023213504.28905-1-igor.stoppa@huawei.com>

In preparation for using write-rare on the nodes of various types of
lists, specify that the fields in the basic data structures must be
aligned to sizeof(void *).

This ensures that no statically allocated node crosses a page boundary,
so that each pointer can be updated in a single step.

Signed-off-by: Igor Stoppa
CC: Greg Kroah-Hartman
CC: Andrew Morton
CC: Masahiro Yamada
CC: Alexey Dobriyan
CC: Pekka Enberg
CC: "Paul E. McKenney"
CC: Lihao Liang
CC: linux-kernel@vger.kernel.org
---
 include/linux/types.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 9834e90aa010..53609bbdcf0f 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -183,17 +183,29 @@ typedef struct {
 } atomic64_t;
 #endif
 
+#ifdef CONFIG_PRMEM
 struct list_head {
-	struct list_head *next, *prev;
-};
+	struct list_head *next __aligned(sizeof(void *));
+	struct list_head *prev __aligned(sizeof(void *));
+} __aligned(sizeof(void *));
 
-struct hlist_head {
-	struct hlist_node *first;
+struct hlist_node {
+	struct hlist_node *next __aligned(sizeof(void *));
+	struct hlist_node **pprev __aligned(sizeof(void *));
+} __aligned(sizeof(void *));
+#else
+struct list_head {
+	struct list_head *next, *prev;
 };
 
 struct hlist_node {
 	struct hlist_node *next, **pprev;
 };
+#endif
+
+struct hlist_head {
+	struct hlist_node *first;
+};
 
 struct ustat {
 	__kernel_daddr_t f_tfree;