From patchwork Thu Sep 30 22:26:57 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12529417
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, kernel test robot, Matt Porter, Alexandre Bounine, Jing Xiangfeng, Ira Weiny, Souptick Joarder, "Gustavo A. R. Silva", John Hubbard, Joe Perches, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Andy Whitcroft, Dwaipayan Ray, Lukas Bulwahn, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Daniel Micay, Dennis Zhou, Tejun Heo, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 1/8] rapidio: Avoid bogus __alloc_size warning
Date: Thu, 30 Sep 2021 15:26:57 -0700
Message-Id: <20210930222704.2631604-2-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org

After adding __alloc_size attributes to the allocators, GCC 9.3 (but not
later) may incorrectly evaluate the arguments to check_copy_size(),
getting seemingly confused by the size being returned from array_size().
Instead, perform the calculation once, which both makes the code more
readable and avoids the bug in GCC.

In file included from arch/x86/include/asm/preempt.h:7,
                 from include/linux/preempt.h:78,
                 from include/linux/spinlock.h:55,
                 from include/linux/mm_types.h:9,
                 from include/linux/buildid.h:5,
                 from include/linux/module.h:14,
                 from drivers/rapidio/devices/rio_mport_cdev.c:13:
In function 'check_copy_size',
    inlined from 'copy_from_user' at include/linux/uaccess.h:191:6,
    inlined from 'rio_mport_transfer_ioctl' at drivers/rapidio/devices/rio_mport_cdev.c:983:6:
include/linux/thread_info.h:213:4: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
  213 |    __bad_copy_to();
      |    ^~~~~~~~~~~~~~~

But the allocation size and the copy size are identical:

	transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
	if (!transfer)
		return -ENOMEM;

	if (unlikely(copy_from_user(transfer,
				    (void __user *)(uintptr_t)transaction.block,
				    array_size(sizeof(*transfer), transaction.count)))) {

Reported-by: kernel test robot
Link: https://lore.kernel.org/linux-mm/202109091134.FHnRmRxu-lkp@intel.com/
Cc: Matt Porter
Cc: Alexandre Bounine
Cc: Jing Xiangfeng
Cc: Ira Weiny
Cc: Souptick Joarder
Cc: Gustavo A. R. Silva
Signed-off-by: Kees Cook
Reviewed-by: John Hubbard
Reviewed-by: Gustavo A. R. Silva
---
 drivers/rapidio/devices/rio_mport_cdev.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index 94331d999d27..7df466e22282 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -965,6 +965,7 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 	struct rio_transfer_io *transfer;
 	enum dma_data_direction dir;
 	int i, ret = 0;
+	size_t size;
 
 	if (unlikely(copy_from_user(&transaction, arg, sizeof(transaction))))
 		return -EFAULT;
@@ -976,13 +977,14 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 			priv->md->properties.transfer_mode) == 0)
 		return -ENODEV;
 
-	transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
+	size = array_size(sizeof(*transfer), transaction.count);
+	transfer = vmalloc(size);
 	if (!transfer)
 		return -ENOMEM;
 
 	if (unlikely(copy_from_user(transfer,
 				    (void __user *)(uintptr_t)transaction.block,
-				    array_size(sizeof(*transfer), transaction.count)))) {
+				    size))) {
 		ret = -EFAULT;
 		goto out_free;
 	}
@@ -994,8 +996,7 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 					transaction.sync, dir, &transfer[i]);
 
 	if (unlikely(copy_to_user((void __user *)(uintptr_t)transaction.block,
-				  transfer,
-				  array_size(sizeof(*transfer), transaction.count))))
+				  transfer, size)))
 		ret = -EFAULT;
 
 out_free:

From patchwork Thu Sep 30 22:26:58 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12529413
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Randy Dunlap, Andy Whitcroft, Christoph Lameter, Daniel Micay, David Rientjes, Dennis Zhou, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn, Pekka Enberg, Tejun Heo, Vlastimil Babka, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 2/8] Compiler Attributes: add __alloc_size() for better bounds checking
Date: Thu, 30 Sep 2021 15:26:58 -0700
Message-Id: <20210930222704.2631604-3-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org

GCC and Clang can use the "alloc_size" attribute to better inform the
results of __builtin_object_size() (for compile-time constant values).
Clang can additionally use alloc_size to inform the results of
__builtin_dynamic_object_size() (for run-time values).

Because GCC sees the frequent use of struct_size() as an allocator size
argument, and notices it can return SIZE_MAX (the overflow indication),
it complains about these call sites overflowing (since SIZE_MAX is
greater than the default -Walloc-size-larger-than=PTRDIFF_MAX). This
isn't helpful since we already know a SIZE_MAX will be caught at
run-time (this was an intentional design).
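As an aside, a minimal stand-alone sketch of that mechanism (user-space,
not part of this patch; the my_alloc() wrapper is invented purely for
illustration):

	#include <stdio.h>
	#include <stdlib.h>

	/* Annotated like the kernel's __alloc_size(1): argument 1 is the
	 * allocation size. noinline keeps the compiler from just looking
	 * through to the underlying malloc() call. */
	__attribute__((alloc_size(1), malloc, noinline))
	static void *my_alloc(size_t size)
	{
		return malloc(size);
	}

	int main(void)
	{
		char *p = my_alloc(128);

		/* With -O2, GCC and Clang should report 128 here; without
		 * the attribute the maximum object size is unknown and the
		 * builtin returns (size_t)-1. Clang can additionally
		 * resolve non-constant sizes via
		 * __builtin_dynamic_object_size(). */
		printf("%zu\n", __builtin_object_size(p, 0));
		free(p);
		return 0;
	}

(The struct_size()/array_size() helpers saturate to SIZE_MAX on overflow,
which is exactly the value that trips the -Walloc-size-larger-than check
described above and dealt with below.)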
To deal with this, we must disable this check as it is both a false positive and redundant. (Clang does not have this warning option.) Unfortunately, just checking the -Wno-alloc-size-larger-than is not sufficient to make the __alloc_size attribute behave correctly under older GCC versions. The attribute itself must be disabled in those situations too, as there appears to be no way to reliably silence the SIZE_MAX constant expression cases for GCC versions less than 9.1: In file included from ./include/linux/resource_ext.h:11, from ./include/linux/pci.h:40, from drivers/net/ethernet/intel/ixgbe/ixgbe.h:9, from drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c:4: In function 'kmalloc_node', inlined from 'ixgbe_alloc_q_vector' at ./include/linux/slab.h:743:9: ./include/linux/slab.h:618:9: error: argument 1 value '18446744073709551615' exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=] return __kmalloc_node(size, flags, node); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ./include/linux/slab.h: In function 'ixgbe_alloc_q_vector': ./include/linux/slab.h:455:7: note: in a call to allocation function '__kmalloc_node' declared here void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_slab_alignment __malloc; ^~~~~~~~~~~~~~ Specifically: -Wno-alloc-size-larger-than is not correctly handled by GCC < 9.1 https://godbolt.org/z/hqsfG7q84 (doesn't disable) https://godbolt.org/z/P9jdrPTYh (doesn't admit to not knowing about option) https://godbolt.org/z/465TPMWKb (only warns when other warnings appear) -Walloc-size-larger-than=18446744073709551615 is not handled by GCC < 8.2 https://godbolt.org/z/73hh1EPxz (ignores numeric value) Since anything marked with __alloc_size would also qualify for marking with __malloc, just include __malloc along with it to avoid redundant markings. (Suggested by Linus Torvalds.) Finally, make sure checkpatch.pl doesn't get confused about finding the __alloc_size attribute on functions. (Thanks to Joe Perches.) Tested-by: Randy Dunlap Cc: Andy Whitcroft Cc: Christoph Lameter Cc: Daniel Micay Cc: David Rientjes Cc: Dennis Zhou Cc: Dwaipayan Ray Cc: Joe Perches Cc: Joonsoo Kim Cc: Lukas Bulwahn Cc: Pekka Enberg Cc: Tejun Heo Cc: Vlastimil Babka Signed-off-by: Kees Cook Reviewed-by: Miguel Ojeda --- Makefile | 15 +++++++++++++++ include/linux/compiler-gcc.h | 8 ++++++++ include/linux/compiler_attributes.h | 10 ++++++++++ include/linux/compiler_types.h | 12 ++++++++++++ scripts/checkpatch.pl | 3 ++- 5 files changed, 47 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 5e7c1d854441..b1a98ac31200 100644 --- a/Makefile +++ b/Makefile @@ -1008,6 +1008,21 @@ ifdef CONFIG_CC_IS_GCC KBUILD_CFLAGS += -Wno-maybe-uninitialized endif +ifdef CONFIG_CC_IS_GCC +# The allocators already balk at large sizes, so silence the compiler +# warnings for bounds checks involving those possible values. While +# -Wno-alloc-size-larger-than would normally be used here, earlier versions +# of gcc (<9.1) weirdly don't handle the option correctly when _other_ +# warnings are produced (?!). Using -Walloc-size-larger-than=SIZE_MAX +# doesn't work (as it is documented to), silently resolving to "0" prior to +# version 9.1 (and producing an error more recently). Numeric values larger +# than PTRDIFF_MAX also don't work prior to version 9.1, which are silently +# ignored, continuing to default to PTRDIFF_MAX. So, left with no other +# choice, we must perform a versioned check to disable this warning. 
+# https://lore.kernel.org/lkml/20210824115859.187f272f@canb.auug.org.au +KBUILD_CFLAGS += $(call cc-ifversion, -ge, 0901, -Wno-alloc-size-larger-than) +endif + # disable invalid "can't wrap" optimizations for signed / pointers KBUILD_CFLAGS += -fno-strict-overflow diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index bd2b881c6b63..b9d5f9c373a0 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -144,3 +144,11 @@ #else #define __diag_GCC_8(s) #endif + +/* + * Prior to 9.1, -Wno-alloc-size-larger-than (and therefore the "alloc_size" + * attribute) do not work, and must be disabled. + */ +#if GCC_VERSION < 90100 +#undef __alloc_size__ +#endif diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h index e6ec63403965..3de06a8fae73 100644 --- a/include/linux/compiler_attributes.h +++ b/include/linux/compiler_attributes.h @@ -33,6 +33,15 @@ #define __aligned(x) __attribute__((__aligned__(x))) #define __aligned_largest __attribute__((__aligned__)) +/* + * Note: do not use this directly. Instead, use __alloc_size() since it is conditionally + * available and includes other attributes. + * + * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alloc_005fsize-function-attribute + * clang: https://clang.llvm.org/docs/AttributeReference.html#alloc-size + */ +#define __alloc_size__(x, ...) __attribute__((__alloc_size__(x, ## __VA_ARGS__))) + /* * Note: users of __always_inline currently do not write "inline" themselves, * which seems to be required by gcc to apply the attribute according @@ -153,6 +162,7 @@ /* * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-malloc-function-attribute + * clang: https://clang.llvm.org/docs/AttributeReference.html#malloc */ #define __malloc __attribute__((__malloc__)) diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index b6ff83a714ca..4f2203c4a257 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -250,6 +250,18 @@ struct ftrace_likely_data { # define __cficanonical #endif +/* + * Any place that could be marked with the "alloc_size" attribute is also + * a place to be marked with the "malloc" attribute. Do this as part of the + * __alloc_size macro to avoid redundant attributes and to avoid missing a + * __malloc marking. + */ +#ifdef __alloc_size__ +# define __alloc_size(x, ...) __alloc_size__(x, ## __VA_ARGS__) __malloc +#else +# define __alloc_size(x, ...) __malloc +#endif + #ifndef asm_volatile_goto #define asm_volatile_goto(x...) 
asm goto(x) #endif diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index c27d2312cfc3..88cb294dc447 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -489,7 +489,8 @@ our $Attribute = qr{ ____cacheline_aligned| ____cacheline_aligned_in_smp| ____cacheline_internodealigned_in_smp| - __weak + __weak| + __alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) }x; our $Modifier; our $Inline = qr{inline|__always_inline|noinline|__inline|__inline__}; From patchwork Thu Sep 30 22:26:59 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 12529409 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D36E4C433F5 for ; Thu, 30 Sep 2021 22:27:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BAC2A61A52 for ; Thu, 30 Sep 2021 22:27:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1343780AbhI3W2w (ORCPT ); Thu, 30 Sep 2021 18:28:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33576 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230104AbhI3W2v (ORCPT ); Thu, 30 Sep 2021 18:28:51 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C273C06176D for ; Thu, 30 Sep 2021 15:27:08 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id ce20-20020a17090aff1400b0019f13f6a749so7564085pjb.4 for ; Thu, 30 Sep 2021 15:27:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=BT90LCkstOs8lo40mx0B1wAunL8tBqNcKJloICKrjPQ=; b=RC1OHFyjxktnPOJtgBxUBH84zn9fSZ38Zhq8Wi83FTysSaB3pmYPxNijtvte3iAUAH vwX8yXUhSKj4wUx4cTBotgH0tICjEVvlEeCAknggHrsRqJhJEo1YEciktEIQivGeHh75 bXf7j0es7Sa8cOpUBQ9hSh4c3BOebAV2Grn1I= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=BT90LCkstOs8lo40mx0B1wAunL8tBqNcKJloICKrjPQ=; b=75+jUB/8oSRrSCqRFSy4e/dO4oOEHBfSUxOIbqRrC6VCU237dsJUy2giCyhbRs351E ZLJ7mTIipSGAjYnWIWGhsNIMbYWSrgQQEkB4MHiI5xJ5M8/owngXHf/VY+i++TohqqF5 819TNFRys1ekKobHqzGPWOCtHtB6fdWtjBjav/uSA8OfWUSa0hMl72wXD6BAAgtQqgJG qSFJoQUh3hUeMoDi7qY2LeQEaPFpiTRwRmpc76/2JIx1McIQrTVkprZaYaeNp7ruvd+N r6fUym3TBjskq9lB6k/fxXhtgLDs1ii5ZAsrt1LdOH+mIkfQSC8oDMkfpTU7KRP1ctEK nrvA== X-Gm-Message-State: AOAM533I5+diFRysMkL4a3eZybJTYbpYd9Y23Rkh3z6Amz2DOLMfW0JG AF6H00hwI5mWM/z4hYVoJfzehQ== X-Google-Smtp-Source: ABdhPJyMfjpC76ypDMyp3+NLQz9t5YC7E7dp/eMNGw7umGk0OtlkjaRm0QXZ32bmrT30OgO49awBTA== X-Received: by 2002:a17:90b:3e84:: with SMTP id rj4mr15805970pjb.208.1633040827758; Thu, 30 Sep 2021 15:27:07 -0700 (PDT) Received: from www.outflux.net (smtp.outflux.net. 
[198.145.64.163]) by smtp.gmail.com with ESMTPSA id d5sm3528826pjs.53.2021.09.30.15.27.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 30 Sep 2021 15:27:07 -0700 (PDT) From: Kees Cook To: Andrew Morton Cc: Kees Cook , Christoph Lameter , Pekka Enberg , David Rientjes , Joonsoo Kim , Vlastimil Babka , linux-mm@kvack.org, Joe Perches , Miguel Ojeda , Nathan Chancellor , Nick Desaulniers , Andy Whitcroft , Dwaipayan Ray , Lukas Bulwahn , Daniel Micay , Dennis Zhou , Tejun Heo , Masahiro Yamada , Michal Marek , clang-built-linux@googlegroups.com, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org Subject: [PATCH v3 3/8] slab: Clean up function prototypes Date: Thu, 30 Sep 2021 15:26:59 -0700 Message-Id: <20210930222704.2631604-4-keescook@chromium.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org> References: <20210930222704.2631604-1-keescook@chromium.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=7538; h=from:subject; bh=4zGyVVwporv8ja7RCDSosI43pqvJgeMGlgRJrQqRrvI=; b=owEBbQKS/ZANAwAKAYly9N/cbcAmAcsmYgBhVjm38v31ZVQqC6PbDKBvc3yzNA+GO2RmSS9QSIVZ CEBfQX6JAjMEAAEKAB0WIQSlw/aPIp3WD3I+bhOJcvTf3G3AJgUCYVY5twAKCRCJcvTf3G3AJmN4D/ 4vNGbbh2hscU2eFW3AiHCopH+L3OHR+xl7Sn5RTZrk1m49xZUHxkzxOJsuxYXhy1A/uknbc29G8NrP KwGpHfPdQ+AjTbaqt5hcMOP7z0ffFK6t6RCGvVu0dBpEMsBk0YDN+lG7xS5aCm9pLgXDCOT5Q/2m30 mtMjTpEVIpRFeLWSpieVomarBGwQ5q2Bv42PWTnUmL8A+iNbY426xt1VkNh67lWl8TfhUKJ6gUd5Nj pLM7NpBQH3P3/0PYamPaJzPA6ShBVCTTc3EIjA/efKRZBDx2fR+8VXRvDgKorjtlh+T6/e/dnpCA4L n1RFBxOnRqGvpOss5V00nWIeJna+DRdZaGWrrDcr7TkT8I6w0NtKlhHmKNwMrrQkJrMl6X2Ebiq4Hz tPBDAVmTdS17XVAGKPhcaBV3krF6NWj9Vtw97hrrY8zQLlDiRW2wPkug5KTrWEkYQXPBK4X8v7bDV4 M2cXzTw6K0meXie2c9UcIR4CYepC5vFj3kfaSmCwUVBb0ZkCehShHV8kqUqxfD9AA2Ahzozh2Vsqc3 P/uTWkwXaEA7ZpL0fagn4IBEdLgKf3pFvNp7yIJkm8pYQPoTcDw3aBQ1t/4nhxk3rRlRFe1IiGTDiM D7eBTHrpDKRzGi1BE3EV+JuvSOzcyI8QnotreEGoS+HGqFBBH5zSzUSD2q5w== X-Developer-Key: i=keescook@chromium.org; a=openpgp; fpr=A5C3F68F229DD60F723E6E138972F4DFDC6DC026 Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Based on feedback from Joe Perches and Linus Torvalds, regularize the slab function prototypes before making attribute changes. Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Andrew Morton Cc: Vlastimil Babka Cc: linux-mm@kvack.org Signed-off-by: Kees Cook --- include/linux/slab.h | 68 ++++++++++++++++++++++---------------------- 1 file changed, 34 insertions(+), 34 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index 083f3ce550bc..d9f14125d7a2 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -152,8 +152,8 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name, slab_flags_t flags, unsigned int useroffset, unsigned int usersize, void (*ctor)(void *)); -void kmem_cache_destroy(struct kmem_cache *); -int kmem_cache_shrink(struct kmem_cache *); +void kmem_cache_destroy(struct kmem_cache *s); +int kmem_cache_shrink(struct kmem_cache *s); /* * Please use this macro to create slab caches. 
Simply specify the @@ -181,11 +181,11 @@ int kmem_cache_shrink(struct kmem_cache *); /* * Common kmalloc functions provided by all allocators */ -void * __must_check krealloc(const void *, size_t, gfp_t); -void kfree(const void *); -void kfree_sensitive(const void *); -size_t __ksize(const void *); -size_t ksize(const void *); +void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags); +void kfree(const void *objp); +void kfree_sensitive(const void *objp); +size_t __ksize(const void *objp); +size_t ksize(const void *objp); #ifdef CONFIG_PRINTK bool kmem_valid_obj(void *object); void kmem_dump_obj(void *object); @@ -426,8 +426,8 @@ static __always_inline unsigned int __kmalloc_index(size_t size, #endif /* !CONFIG_SLOB */ void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc; -void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment __malloc; -void kmem_cache_free(struct kmem_cache *, void *); +void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc; +void kmem_cache_free(struct kmem_cache *s, void *objp); /* * Bulk allocation and freeing operations. These are accelerated in an @@ -436,8 +436,8 @@ void kmem_cache_free(struct kmem_cache *, void *); * * Note that interrupts must be enabled when calling these functions. */ -void kmem_cache_free_bulk(struct kmem_cache *, size_t, void **); -int kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **); +void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p); +int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p); /* * Caller must not use kfree_bulk() on memory not originally allocated @@ -450,7 +450,8 @@ static __always_inline void kfree_bulk(size_t size, void **p) #ifdef CONFIG_NUMA void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc; -void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node) __assume_slab_alignment __malloc; +void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment + __malloc; #else static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node) { @@ -464,25 +465,24 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f #endif #ifdef CONFIG_TRACING -extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t) __assume_slab_alignment __malloc; +extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) + __assume_slab_alignment __malloc; #ifdef CONFIG_NUMA -extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, - gfp_t gfpflags, - int node, size_t size) __assume_slab_alignment __malloc; +extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t size) __assume_slab_alignment __malloc; #else -static __always_inline void * -kmem_cache_alloc_node_trace(struct kmem_cache *s, - gfp_t gfpflags, - int node, size_t size) +static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, + gfp_t gfpflags, int node, + size_t size) { return kmem_cache_alloc_trace(s, gfpflags, size); } #endif /* CONFIG_NUMA */ #else /* CONFIG_TRACING */ -static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, - gfp_t flags, size_t size) +static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, + size_t size) { void *ret = kmem_cache_alloc(s, flags); @@ -490,10 +490,8 @@ static __always_inline void 
*kmem_cache_alloc_trace(struct kmem_cache *s, return ret; } -static __always_inline void * -kmem_cache_alloc_node_trace(struct kmem_cache *s, - gfp_t gfpflags, - int node, size_t size) +static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, + int node, size_t size) { void *ret = kmem_cache_alloc_node(s, gfpflags, node); @@ -502,13 +500,14 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s, } #endif /* CONFIG_TRACING */ -extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc; +extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment + __malloc; #ifdef CONFIG_TRACING -extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc; +extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) + __assume_page_alignment __malloc; #else -static __always_inline void * -kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) +static __always_inline void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) { return kmalloc_order(size, flags, order); } @@ -638,8 +637,8 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags) * @new_size: new size of a single member of the array * @flags: the type of memory to allocate (see kmalloc) */ -static __must_check inline void * -krealloc_array(void *p, size_t new_n, size_t new_size, gfp_t flags) +static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t new_size, + gfp_t flags) { size_t bytes; @@ -668,7 +667,7 @@ static inline void *kcalloc(size_t n, size_t size, gfp_t flags) * allocator where we care about the real place the memory allocation * request comes from. 
*/ -extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long); +extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller); #define kmalloc_track_caller(size, flags) \ __kmalloc_track_caller(size, flags, _RET_IP_) @@ -691,7 +690,8 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) #ifdef CONFIG_NUMA -extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long); +extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node, + unsigned long caller); #define kmalloc_node_track_caller(size, flags, node) \ __kmalloc_node_track_caller(size, flags, node, \ _RET_IP_) From patchwork Thu Sep 30 22:27:00 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 12529415 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 30329C433F5 for ; Thu, 30 Sep 2021 22:27:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0BC6D6128C for ; Thu, 30 Sep 2021 22:27:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1345120AbhI3W2y (ORCPT ); Thu, 30 Sep 2021 18:28:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33582 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1344642AbhI3W2x (ORCPT ); Thu, 30 Sep 2021 18:28:53 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1E196C06176E for ; Thu, 30 Sep 2021 15:27:10 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id na16-20020a17090b4c1000b0019f5bb661f9so2077424pjb.0 for ; Thu, 30 Sep 2021 15:27:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=zygAiLSQoNsoIY1uvxoA1hwJtuo0y1KIlpyKLcao1qw=; b=YGS0FrT8KgFYxIsPD1Vmo9YWUEsTitNyDANqflXGAcsxo5zFbryKfJ3PfKygTk25ql O0fmHhCXQZOedvwacR/EjYGFf81Coe7XD5U+og7DwDF4ETb13Zrj2+ZmMsrzgEJNqYvZ PG1VS5MwQ61xDXx3Kmy6wRy0VfGli/ZIYblIM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zygAiLSQoNsoIY1uvxoA1hwJtuo0y1KIlpyKLcao1qw=; b=KgaFto4CRL9xOWR5vi8kgsaMmZbgjZ2KBixevG/QEoAudCpkmkDqWcLtZusaZVQsWG L79fEJyVMQ6v9Ib/4imzvT6GWN9DTSFQJGSEGc9SUBQM7RNQABGIro61t70NaYwDYssg Z1gp9lgBknkb6VD/c42lRi3Q34ecNG0CpoUMV6sfXksCMkesj3Mfw7Thul6i2pqUdLos eXXa6gPXrtaCxv5P8aqf7cA+Yvk3X1p67dxzUHuMj3FkojmI3Te8dvQEGPiDcliTPOpb CkBXZsqOOm5LGWtYrnTA9wPcuQb4RgT59T4wrIOBxjDrGcPUB1DIMD9VvL58yws/duPZ NXXQ== X-Gm-Message-State: AOAM5317lvS42lUkYBUReLh+RXQUYbydqfm1fsKLyVWYw5S0YzwVIgvE /qlMFWg7RCv6IrOpm8amwb/AZA== X-Google-Smtp-Source: ABdhPJwt90voN5Dif7tX7/zONpBmQ3a4brdmA7e+RYd3OUtCE8kmT7pZTDjrBs2yiQk3xt/RqH38tQ== X-Received: by 2002:a17:90b:4d07:: with SMTP id mw7mr15084160pjb.66.1633040829698; Thu, 30 Sep 2021 15:27:09 -0700 (PDT) Received: from www.outflux.net (smtp.outflux.net. 
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Andy Whitcroft, Dennis Zhou, Dwaipayan Ray, Joe Perches, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Tejun Heo, Daniel Micay, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 4/8] slab: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:00 -0700
Message-Id: <20210930222704.2631604-5-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org

As already done in GrapheneOS, add the __alloc_size attribute for regular
kmalloc interfaces, to provide additional hinting for better bounds
checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
optimizations.
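To illustrate the class of bug this hinting exposes, a stand-alone
user-space sketch (not from this patch; toy_kmalloc() is an invented
stand-in for an annotated allocator, and the exact diagnostic depends on
the compiler version and fortify level):

	#include <stdlib.h>
	#include <string.h>

	/* Stand-in for a kmalloc-style allocator carrying __alloc_size(1). */
	__attribute__((alloc_size(1), malloc, noinline))
	static void *toy_kmalloc(size_t size)
	{
		return malloc(size);
	}

	int main(void)
	{
		char *buf = toy_kmalloc(8);

		/* Deliberate overflow: 16 bytes into an 8-byte allocation.
		 * Because the allocator is annotated, building with -O2
		 * (and optionally -D_FORTIFY_SOURCE=2) lets the toolchain
		 * see the destination size and flag this copy, instead of
		 * treating the buffer size as unknown. */
		memcpy(buf, "0123456789abcdef", 16);

		free(buf);
		return 0;
	}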
Cc: Christoph Lameter Cc: Pekka Enberg Cc: David Rientjes Cc: Joonsoo Kim Cc: Vlastimil Babka Cc: Andy Whitcroft Cc: Dennis Zhou Cc: Dwaipayan Ray Cc: Joe Perches Cc: Lukas Bulwahn Cc: Miguel Ojeda Cc: Nathan Chancellor Cc: Tejun Heo Co-developed-by: Daniel Micay Signed-off-by: Daniel Micay Signed-off-by: Kees Cook Reviewed-by: Nick Desaulniers --- include/linux/slab.h | 61 ++++++++++++++++++++++++-------------------- 1 file changed, 33 insertions(+), 28 deletions(-) diff --git a/include/linux/slab.h b/include/linux/slab.h index d9f14125d7a2..844b776deecf 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -181,7 +181,7 @@ int kmem_cache_shrink(struct kmem_cache *s); /* * Common kmalloc functions provided by all allocators */ -void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags); +void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2); void kfree(const void *objp); void kfree_sensitive(const void *objp); size_t __ksize(const void *objp); @@ -425,7 +425,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size, #define kmalloc_index(s) __kmalloc_index(s, true) #endif /* !CONFIG_SLOB */ -void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc; +void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1); void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc; void kmem_cache_free(struct kmem_cache *s, void *objp); @@ -449,11 +449,12 @@ static __always_inline void kfree_bulk(size_t size, void **p) } #ifdef CONFIG_NUMA -void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc; +void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment + __alloc_size(1); void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment __malloc; #else -static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node) +static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node) { return __kmalloc(size, flags); } @@ -466,23 +467,23 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f #ifdef CONFIG_TRACING extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size) - __assume_slab_alignment __malloc; + __assume_slab_alignment __alloc_size(3); #ifdef CONFIG_NUMA extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags, - int node, size_t size) __assume_slab_alignment __malloc; + int node, size_t size) __assume_slab_alignment + __alloc_size(4); #else -static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, - gfp_t gfpflags, int node, - size_t size) +static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s, + gfp_t gfpflags, int node, size_t size) { return kmem_cache_alloc_trace(s, gfpflags, size); } #endif /* CONFIG_NUMA */ #else /* CONFIG_TRACING */ -static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, - size_t size) +static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s, + gfp_t flags, size_t size) { void *ret = kmem_cache_alloc(s, flags); @@ -501,19 +502,20 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g #endif /* CONFIG_TRACING */ extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment - __malloc; + __alloc_size(1); #ifdef 
CONFIG_TRACING extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) - __assume_page_alignment __malloc; + __assume_page_alignment __alloc_size(1); #else -static __always_inline void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) +static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags, + unsigned int order) { return kmalloc_order(size, flags, order); } #endif -static __always_inline void *kmalloc_large(size_t size, gfp_t flags) +static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags) { unsigned int order = get_order(size); return kmalloc_order_trace(size, flags, order); @@ -573,7 +575,7 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags) * Try really hard to succeed the allocation but fail * eventually. */ -static __always_inline void *kmalloc(size_t size, gfp_t flags) +static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags) { if (__builtin_constant_p(size)) { #ifndef CONFIG_SLOB @@ -595,7 +597,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags) return __kmalloc(size, flags); } -static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node) +static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node) { #ifndef CONFIG_SLOB if (__builtin_constant_p(size) && @@ -619,7 +621,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node) * @size: element size. * @flags: the type of memory to allocate (see kmalloc). */ -static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags) +static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags) { size_t bytes; @@ -637,8 +639,10 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags) * @new_size: new size of a single member of the array * @flags: the type of memory to allocate (see kmalloc) */ -static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t new_size, - gfp_t flags) +static inline __alloc_size(2, 3) void * __must_check krealloc_array(void *p, + size_t new_n, + size_t new_size, + gfp_t flags) { size_t bytes; @@ -654,7 +658,7 @@ static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t n * @size: element size. * @flags: the type of memory to allocate (see kmalloc). */ -static inline void *kcalloc(size_t n, size_t size, gfp_t flags) +static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags) { return kmalloc_array(n, size, flags | __GFP_ZERO); } @@ -667,12 +671,13 @@ static inline void *kcalloc(size_t n, size_t size, gfp_t flags) * allocator where we care about the real place the memory allocation * request comes from. 
*/ -extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller); +extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller) + __alloc_size(1); #define kmalloc_track_caller(size, flags) \ __kmalloc_track_caller(size, flags, _RET_IP_) -static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags, - int node) +static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags, + int node) { size_t bytes; @@ -683,7 +688,7 @@ static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags, return __kmalloc_node(bytes, flags, node); } -static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) +static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) { return kmalloc_array_node(n, size, flags | __GFP_ZERO, node); } @@ -691,7 +696,7 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node) #ifdef CONFIG_NUMA extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node, - unsigned long caller); + unsigned long caller) __alloc_size(1); #define kmalloc_node_track_caller(size, flags, node) \ __kmalloc_node_track_caller(size, flags, node, \ _RET_IP_) @@ -716,7 +721,7 @@ static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags) * @size: how many bytes of memory are required. * @flags: the type of memory to allocate (see kmalloc). */ -static inline void *kzalloc(size_t size, gfp_t flags) +static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags) { return kmalloc(size, flags | __GFP_ZERO); } @@ -727,7 +732,7 @@ static inline void *kzalloc(size_t size, gfp_t flags) * @flags: the type of memory to allocate (see kmalloc). * @node: memory node from which to allocate */ -static inline void *kzalloc_node(size_t size, gfp_t flags, int node) +static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node) { return kmalloc_node(size, flags | __GFP_ZERO, node); } From patchwork Thu Sep 30 22:27:01 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kees Cook X-Patchwork-Id: 12529419 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E199AC41535 for ; Thu, 30 Sep 2021 22:27:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CE95561A61 for ; Thu, 30 Sep 2021 22:27:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1347954AbhI3W2z (ORCPT ); Thu, 30 Sep 2021 18:28:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346105AbhI3W2y (ORCPT ); Thu, 30 Sep 2021 18:28:54 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9BF22C061770 for ; Thu, 30 Sep 2021 15:27:11 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id s11so7620354pgr.11 for ; Thu, 30 Sep 2021 15:27:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1dXnvTeLUCigPMMnfClviJlp/0W0l8ED7JBU+sWSCTQ=; 
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Andy Whitcroft, Dennis Zhou, Dwaipayan Ray, Joe Perches, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Tejun Heo, Daniel Micay, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 5/8] mm/kvmalloc: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:01 -0700
Message-Id: <20210930222704.2631604-6-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org

As already done in GrapheneOS, add the __alloc_size attribute for regular
kvmalloc interfaces, to provide additional hinting for better bounds
checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
optimizations.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Vlastimil Babka
Cc: Andy Whitcroft
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Tejun Heo
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
Reviewed-by: Nick Desaulniers
---
 include/linux/mm.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..03dfb466d4f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -799,21 +799,21 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 }
 #endif
 
-extern void *kvmalloc_node(size_t size, gfp_t flags, int node);
-static inline void *kvmalloc(size_t size, gfp_t flags)
+extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1);
+static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags)
 {
 	return kvmalloc_node(size, flags, NUMA_NO_NODE);
 }
-static inline void *kvzalloc_node(size_t size, gfp_t flags, int node)
+static inline __alloc_size(1) void *kvzalloc_node(size_t size, gfp_t flags, int node)
 {
 	return kvmalloc_node(size, flags | __GFP_ZERO, node);
 }
-static inline void *kvzalloc(size_t size, gfp_t flags)
+static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
 {
 	return kvmalloc(size, flags | __GFP_ZERO);
 }
-static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
@@ -823,13 +823,13 @@ static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
 	return kvmalloc(bytes, flags);
 }
 
-static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)
 {
 	return kvmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
-extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize,
-		       gfp_t flags);
+extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+		__alloc_size(3);
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
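Since kvmalloc() sizes are often not compile-time constants, here is a
stand-alone sketch of what the annotation enables for run-time values
(user-space, not from this series; toy_kvmalloc_array() is invented for
the example, and __builtin_dynamic_object_size() needs Clang or a recent
GCC):

	#include <stdio.h>
	#include <stdlib.h>

	/* Stand-in for an allocator carrying __alloc_size(1, 2): the
	 * allocation size is the product of arguments 1 and 2. */
	__attribute__((alloc_size(1, 2), malloc, noinline))
	static void *toy_kvmalloc_array(size_t n, size_t size)
	{
		return calloc(n, size);
	}

	int main(int argc, char *argv[])
	{
		size_t n = (size_t)argc + 3;	/* not a compile-time constant */
		char *p = toy_kvmalloc_array(n, 16);

		/* Should print n * 16 (64 when run with no arguments): the
		 * builtin evaluates the size expression at run time from
		 * the alloc_size arguments, which is the hook run-time
		 * bounds checks can use for non-constant copies. */
		printf("%zu\n", __builtin_dynamic_object_size(p, 0));
		free(p);
		return 0;
	}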
As already done in GrapheneOS, add the __alloc_size attribute for
appropriate vmalloc allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.

Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Tejun Heo
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
---
 include/linux/vmalloc.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 671d402c3778..0ed56fc10c11 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -136,21 +136,21 @@ static inline void vmalloc_init(void)
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
-extern void *vmalloc(unsigned long size);
-extern void *vzalloc(unsigned long size);
-extern void *vmalloc_user(unsigned long size);
-extern void *vmalloc_node(unsigned long size, int node);
-extern void *vzalloc_node(unsigned long size, int node);
-extern void *vmalloc_32(unsigned long size);
-extern void *vmalloc_32_user(unsigned long size);
-extern void *__vmalloc(unsigned long size, gfp_t gfp_mask);
+extern void *vmalloc(unsigned long size) __alloc_size(1);
+extern void *vzalloc(unsigned long size) __alloc_size(1);
+extern void *vmalloc_user(unsigned long size) __alloc_size(1);
+extern void *vmalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vzalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vmalloc_32(unsigned long size) __alloc_size(1);
+extern void *vmalloc_32_user(unsigned long size) __alloc_size(1);
+extern void *__vmalloc(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
-			const void *caller);
+			const void *caller) __alloc_size(1);
 void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
-		int node, const void *caller);
-void *vmalloc_no_huge(unsigned long size);
+		int node, const void *caller) __alloc_size(1);
+void *vmalloc_no_huge(unsigned long size) __alloc_size(1);
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
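To picture how FORTIFY_SOURCE-style checking consumes the hint, here is a minimal sketch, again with made-up demo_* stand-ins rather than the real vmalloc() and check_copy_size(); the kernel turns the same comparison into a hard compile-time error rather than a runtime report, and the behavior shown assumes optimization is enabled.

#include <stdio.h>
#include <stdlib.h>

#define __alloc_size(x) __attribute__((__alloc_size__(x)))

/* Made-up stand-in for vmalloc(), noinline so only the attribute carries
 * size information across the call. */
static void *demo_vmalloc(unsigned long size)
	__alloc_size(1) __attribute__((noinline));
static void *demo_vmalloc(unsigned long size)
{
	return malloc(size);
}

/* Toy model of the kernel's copy checking: report whether a copy of len
 * bytes into dst would be flagged, given what the compiler knows about
 * dst. check_copy_size() turns the same comparison into a compile-time
 * error via __bad_copy_to()/__bad_copy_from(). */
static inline int copy_would_overflow(const void *dst, size_t len)
{
	size_t dst_size = __builtin_object_size(dst, 0);

	return dst_size != (size_t)-1 && len > dst_size;
}

int main(void)
{
	char *dst = demo_vmalloc(8);

	/* With optimization and the annotation, the 16-byte copy is
	 * flagged (prints 0 then 1); without either, both print 0. */
	printf("%d\n", copy_would_overflow(dst, 4));
	printf("%d\n", copy_would_overflow(dst, 16));

	free(dst);
	return 0;
}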
From patchwork Thu Sep 30 22:27:03 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12529423
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Andy Whitcroft, Christoph Lameter, David Rientjes,
    Dennis Zhou, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn,
    Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Pekka Enberg,
    Tejun Heo, Vlastimil Babka, Daniel Micay, Masahiro Yamada,
    Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3 7/8] mm/page_alloc: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:03 -0700
Message-Id: <20210930222704.2631604-8-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org
As already done in GrapheneOS, add the __alloc_size attribute for
appropriate page allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.

Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Tejun Heo
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
---
 include/linux/gfp.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..fbd4abc33f24 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -608,9 +608,9 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
 
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask);
+void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1);
 void free_pages_exact(void *virt, size_t size);
-void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
+__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(1);
 
 #define __get_free_page(gfp_mask) \
 		__get_free_pages((gfp_mask), 0)
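Note that only the byte-count interfaces gain the attribute here. A plausible reading, not stated in the patch itself, is that the order-based interfaces have no byte-size argument (and no suitable pointer return) for __alloc_size() to reference. A declarations-only sketch with stand-in types, compilable outside the kernel, illustrates the contrast:

#include <stddef.h>

/* Stand-in types so this sketch compiles outside the kernel (assumptions). */
typedef unsigned int gfp_t;
struct page;

#define __alloc_size(x) __attribute__((__alloc_size__(x)))

/* Byte-count interface: the returned buffer is 'size' bytes, so the
 * attribute can point at parameter 1. */
void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1);

/* Order-based interfaces: the result is a struct page pointer or a raw
 * address covering 2^order pages, not a pointer described by a byte
 * count, so there is nothing for __alloc_size() to reference and they
 * stay unannotated. */
struct page *alloc_pages(gfp_t gfp_mask, unsigned int order);
unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);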
From patchwork Thu Sep 30 22:27:04 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12529425
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Dennis Zhou, Tejun Heo, Christoph Lameter, Andy Whitcroft,
    David Rientjes, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn,
    Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Pekka Enberg,
    Vlastimil Babka, Daniel Micay, Masahiro Yamada, Michal Marek,
    clang-built-linux@googlegroups.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3 8/8] percpu: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:04 -0700
Message-Id: <20210930222704.2631604-9-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
X-Mailing-List: linux-kbuild@vger.kernel.org

As already done in GrapheneOS, add the __alloc_size attribute for
appropriate percpu allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations. Note that due to the implementation of the
percpu API, this is unlikely to ever actually provide compile-time
checking beyond very simple non-SMP builds.
But, since they are technically allocators, mark them as such.

Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Andy Whitcroft
Cc: David Rientjes
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
Acked-by: Dennis Zhou
---
 include/linux/percpu.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 5e76af742c80..98a9371133f8 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -123,7 +123,7 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
 			pcpu_fc_populate_pte_fn_t populate_pte_fn);
 #endif
 
-extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align);
+extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align) __alloc_size(1);
 extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
 extern bool is_kernel_percpu_address(unsigned long addr);
 
@@ -131,8 +131,8 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 extern void __init setup_per_cpu_areas(void);
 #endif
 
-extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp);
-extern void __percpu *__alloc_percpu(size_t size, size_t align);
+extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
+extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
 
 extern void free_percpu(void __percpu *__pdata);
 extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
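One way to picture the caveat about SMP builds, using made-up stand-ins (the real behavior lives in the per_cpu_ptr() machinery and may differ in detail): once the returned pointer has been translated by a per-CPU offset the compiler cannot see, the size attached to the allocation is no longer visible to bounds checks. A minimal userspace sketch of that effect, assuming optimization is enabled:

#include <stdio.h>
#include <stdlib.h>

#define __alloc_size(x) __attribute__((__alloc_size__(x)))

/* Made-up stand-in for __alloc_percpu(), annotated as in the patch. */
static void *demo_alloc_percpu(size_t size, size_t align)
	__alloc_size(1) __attribute__((noinline));
static void *demo_alloc_percpu(size_t size, size_t align)
{
	(void)align;
	return calloc(1, size);
}

/* Crude model of per_cpu_ptr(): the real macro adds a per-CPU offset the
 * compiler cannot predict, which is what hides the allocation size from
 * any later check on SMP builds. */
static void *demo_per_cpu_ptr(void *p, unsigned long off)
	__attribute__((noinline));
static void *demo_per_cpu_ptr(void *p, unsigned long off)
{
	return (char *)p + off;
}

int main(int argc, char **argv)
{
	void *pcpu = demo_alloc_percpu(8, 8);
	void *slot = demo_per_cpu_ptr(pcpu, (unsigned long)argc - 1);

	(void)argv;
	/* With optimization, the first value can be 8 thanks to the
	 * attribute; the second is typically (size_t)-1 ("unknown")
	 * because the translated pointer is detached from the hint. */
	printf("raw %zu, per-cpu slot %zu\n",
	       __builtin_object_size(pcpu, 0),
	       __builtin_object_size(slot, 0));

	free(pcpu);
	return 0;
}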