From patchwork Thu Sep 30 22:26:57 2021
X-Patchwork-Id: 12538023
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, kernel test robot, Matt Porter, Alexandre Bounine, Jing Xiangfeng, Ira Weiny, Souptick Joarder, Gustavo A. R. Silva, John Hubbard, Joe Perches, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Andy Whitcroft, Dwaipayan Ray, Lukas Bulwahn, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Daniel Micay, Dennis Zhou, Tejun Heo, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 1/8] rapidio: Avoid bogus __alloc_size warning
Date: Thu, 30 Sep 2021 15:26:57 -0700
Message-Id: <20210930222704.2631604-2-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
List-ID: linux-hardening@vger.kernel.org

After adding __alloc_size attributes to the allocators, GCC 9.3 (but not
later versions) may incorrectly evaluate the arguments to
check_copy_size(), getting seemingly confused by the size being returned
from array_size(). Instead, perform the calculation once, which both makes
the code more readable and avoids the bug in GCC.

In file included from arch/x86/include/asm/preempt.h:7,
                 from include/linux/preempt.h:78,
                 from include/linux/spinlock.h:55,
                 from include/linux/mm_types.h:9,
                 from include/linux/buildid.h:5,
                 from include/linux/module.h:14,
                 from drivers/rapidio/devices/rio_mport_cdev.c:13:
In function 'check_copy_size',
    inlined from 'copy_from_user' at include/linux/uaccess.h:191:6,
    inlined from 'rio_mport_transfer_ioctl' at drivers/rapidio/devices/rio_mport_cdev.c:983:6:
include/linux/thread_info.h:213:4: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
  213 |    __bad_copy_to();
      |    ^~~~~~~~~~~~~~~

But the allocation size and the copy size are identical:

	transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
	if (!transfer)
		return -ENOMEM;

	if (unlikely(copy_from_user(transfer,
				    (void __user *)(uintptr_t)transaction.block,
				    array_size(sizeof(*transfer), transaction.count)))) {

Reported-by: kernel test robot
Link: https://lore.kernel.org/linux-mm/202109091134.FHnRmRxu-lkp@intel.com/
Cc: Matt Porter
Cc: Alexandre Bounine
Cc: Jing Xiangfeng
Cc: Ira Weiny
Cc: Souptick Joarder
Cc: Gustavo A. R. Silva
Signed-off-by: Kees Cook
Reviewed-by: John Hubbard
Reviewed-by: Gustavo A. R. Silva
---
 drivers/rapidio/devices/rio_mport_cdev.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index 94331d999d27..7df466e22282 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -965,6 +965,7 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 	struct rio_transfer_io *transfer;
 	enum dma_data_direction dir;
 	int i, ret = 0;
+	size_t size;
 
 	if (unlikely(copy_from_user(&transaction, arg, sizeof(transaction))))
 		return -EFAULT;
@@ -976,13 +977,14 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 		 priv->md->properties.transfer_mode) == 0)
 		return -ENODEV;
 
-	transfer = vmalloc(array_size(sizeof(*transfer), transaction.count));
+	size = array_size(sizeof(*transfer), transaction.count);
+	transfer = vmalloc(size);
 	if (!transfer)
 		return -ENOMEM;
 
 	if (unlikely(copy_from_user(transfer,
 				    (void __user *)(uintptr_t)transaction.block,
-				    array_size(sizeof(*transfer), transaction.count)))) {
+				    size))) {
 		ret = -EFAULT;
 		goto out_free;
 	}
@@ -994,8 +996,7 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
 			transaction.sync, dir, &transfer[i]);
 
 	if (unlikely(copy_to_user((void __user *)(uintptr_t)transaction.block,
-				  transfer,
-				  array_size(sizeof(*transfer), transaction.count))))
+				  transfer, size)))
 		ret = -EFAULT;
 
 out_free:
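As an illustration of the pattern (editorial, not part of the patch; the
helper below is a made-up userspace stand-in for array_size(), and
malloc()/memcpy() stand in for vmalloc()/copy_from_user()): compute the
overflow-checked size once, then let the allocation and the copy bound
share that one value so they can never disagree.

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Stand-in for the kernel's array_size(): saturates to SIZE_MAX on overflow. */
	static size_t demo_array_size(size_t a, size_t b)
	{
		size_t bytes;

		if (__builtin_mul_overflow(a, b, &bytes))
			return SIZE_MAX;
		return bytes;
	}

	struct io_rec { uint64_t addr; uint32_t len; };

	int main(void)
	{
		struct io_rec src[4] = { { 0 } }, *transfer;
		size_t count = 4;
		size_t size = demo_array_size(sizeof(*transfer), count); /* computed once */

		transfer = malloc(size);
		if (!transfer)
			return 1;
		memcpy(transfer, src, size);	/* same value reused as the copy bound */
		printf("copied %zu bytes\n", size);
		free(transfer);
		return 0;
	}

Keeping a single size variable is also what sidesteps the GCC 9.3
mis-evaluation described above, since check_copy_size() then sees a plain
variable rather than a repeated array_size() expression.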
From patchwork Thu Sep 30 22:26:58 2021
X-Patchwork-Id: 12538019
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Randy Dunlap, Andy Whitcroft, Christoph Lameter, Daniel Micay, David Rientjes, Dennis Zhou, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn, Pekka Enberg, Tejun Heo, Vlastimil Babka, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 2/8] Compiler Attributes: add __alloc_size() for better bounds checking
Date: Thu, 30 Sep 2021 15:26:58 -0700
Message-Id: <20210930222704.2631604-3-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
List-ID: linux-hardening@vger.kernel.org

GCC and Clang can use the "alloc_size" attribute to better inform the
results of __builtin_object_size() (for compile-time constant values).
Clang can additionally use alloc_size to inform the results of
__builtin_dynamic_object_size() (for run-time values).

Because GCC sees the frequent use of struct_size() as an allocator size
argument, and notices it can return SIZE_MAX (the overflow indication),
it complains about these call sites overflowing (since SIZE_MAX is
greater than the default -Walloc-size-larger-than=PTRDIFF_MAX). This
isn't helpful since we already know a SIZE_MAX will be caught at
run-time (this was an intentional design). To deal with this, we must
disable this check as it is both a false positive and redundant. (Clang
does not have this warning option.)

Unfortunately, just adding -Wno-alloc-size-larger-than is not sufficient
to make the __alloc_size attribute behave correctly under older GCC
versions. The attribute itself must be disabled in those situations too,
as there appears to be no way to reliably silence the SIZE_MAX constant
expression cases for GCC versions less than 9.1:

   In file included from ./include/linux/resource_ext.h:11,
                    from ./include/linux/pci.h:40,
                    from drivers/net/ethernet/intel/ixgbe/ixgbe.h:9,
                    from drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c:4:
   In function 'kmalloc_node',
       inlined from 'ixgbe_alloc_q_vector' at ./include/linux/slab.h:743:9:
   ./include/linux/slab.h:618:9: error: argument 1 value '18446744073709551615' exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=]
      return __kmalloc_node(size, flags, node);
             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ./include/linux/slab.h: In function 'ixgbe_alloc_q_vector':
   ./include/linux/slab.h:455:7: note: in a call to allocation function '__kmalloc_node' declared here
    void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_slab_alignment __malloc;
          ^~~~~~~~~~~~~~

Specifically:
  '-Wno-alloc-size-larger-than' is not correctly handled by GCC < 9.1
     https://godbolt.org/z/hqsfG7q84 (doesn't disable)
     https://godbolt.org/z/P9jdrPTYh (doesn't admit to not knowing about option)
     https://godbolt.org/z/465TPMWKb (only warns when other warnings appear)
  '-Walloc-size-larger-than=18446744073709551615' is not handled by GCC < 8.2
     https://godbolt.org/z/73hh1EPxz (ignores numeric value)

Since anything marked with __alloc_size would also qualify for marking
with __malloc, just include __malloc along with it to avoid redundant
markings. (Suggested by Linus Torvalds.)

Finally, make sure checkpatch.pl doesn't get confused about finding the
__alloc_size attribute on functions. (Thanks to Joe Perches.)

Tested-by: Randy Dunlap
Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: Daniel Micay
Cc: David Rientjes
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Pekka Enberg
Cc: Tejun Heo
Cc: Vlastimil Babka
Signed-off-by: Kees Cook
Reviewed-by: Miguel Ojeda
---
 Makefile                            | 15 +++++++++++++++
 include/linux/compiler-gcc.h        |  8 ++++++++
 include/linux/compiler_attributes.h | 10 ++++++++++
 include/linux/compiler_types.h      | 12 ++++++++++++
 scripts/checkpatch.pl               |  3 ++-
 5 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 5e7c1d854441..b1a98ac31200 100644
--- a/Makefile
+++ b/Makefile
@@ -1008,6 +1008,21 @@ ifdef CONFIG_CC_IS_GCC
 KBUILD_CFLAGS += -Wno-maybe-uninitialized
 endif
 
+ifdef CONFIG_CC_IS_GCC
+# The allocators already balk at large sizes, so silence the compiler
+# warnings for bounds checks involving those possible values. While
+# -Wno-alloc-size-larger-than would normally be used here, earlier versions
+# of gcc (<9.1) weirdly don't handle the option correctly when _other_
+# warnings are produced (?!). Using -Walloc-size-larger-than=SIZE_MAX
+# doesn't work (as it is documented to), silently resolving to "0" prior to
+# version 9.1 (and producing an error more recently). Numeric values larger
+# than PTRDIFF_MAX also don't work prior to version 9.1, which are silently
+# ignored, continuing to default to PTRDIFF_MAX. So, left with no other
+# choice, we must perform a versioned check to disable this warning.
+# https://lore.kernel.org/lkml/20210824115859.187f272f@canb.auug.org.au
+KBUILD_CFLAGS += $(call cc-ifversion, -ge, 0901, -Wno-alloc-size-larger-than)
+endif
+
 # disable invalid "can't wrap" optimizations for signed / pointers
 KBUILD_CFLAGS	+= -fno-strict-overflow
 
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index bd2b881c6b63..b9d5f9c373a0 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -144,3 +144,11 @@
 #else
 #define __diag_GCC_8(s)
 #endif
+
+/*
+ * Prior to 9.1, -Wno-alloc-size-larger-than (and therefore the "alloc_size"
+ * attribute) do not work, and must be disabled.
+ */
+#if GCC_VERSION < 90100
+#undef __alloc_size__
+#endif
diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index e6ec63403965..3de06a8fae73 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -33,6 +33,15 @@
 #define __aligned(x)			__attribute__((__aligned__(x)))
 #define __aligned_largest		__attribute__((__aligned__))
 
+/*
+ * Note: do not use this directly. Instead, use __alloc_size() since it is conditionally
+ * available and includes other attributes.
+ *
+ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alloc_005fsize-function-attribute
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#alloc-size
+ */
+#define __alloc_size__(x, ...)		__attribute__((__alloc_size__(x, ## __VA_ARGS__)))
+
 /*
  * Note: users of __always_inline currently do not write "inline" themselves,
  * which seems to be required by gcc to apply the attribute according
@@ -153,6 +162,7 @@
 
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-malloc-function-attribute
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#malloc
  */
 #define __malloc			__attribute__((__malloc__))
 
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index b6ff83a714ca..4f2203c4a257 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -250,6 +250,18 @@ struct ftrace_likely_data {
 # define __cficanonical
 #endif
 
+/*
+ * Any place that could be marked with the "alloc_size" attribute is also
+ * a place to be marked with the "malloc" attribute. Do this as part of the
+ * __alloc_size macro to avoid redundant attributes and to avoid missing a
+ * __malloc marking.
+ */
+#ifdef __alloc_size__
+# define __alloc_size(x, ...)	__alloc_size__(x, ## __VA_ARGS__) __malloc
+#else
+# define __alloc_size(x, ...)	__malloc
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index c27d2312cfc3..88cb294dc447 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -489,7 +489,8 @@ our $Attribute	= qr{
 			____cacheline_aligned|
 			____cacheline_aligned_in_smp|
 			____cacheline_internodealigned_in_smp|
-			__weak
+			__weak|
+			__alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\)
 		  }x;
 our $Modifier;
 our $Inline	= qr{inline|__always_inline|noinline|__inline|__inline__};
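As an aside for readers (editorial, not part of the patch): a minimal
userspace sketch of what the attribute feeds. The allocator name below is
made up; with the hint in place, __builtin_object_size() can answer for
compile-time-constant sizes, and Clang (or GCC 12+) can also answer
__builtin_dynamic_object_size() for run-time sizes, which are exactly the
two cases the commit message describes.

	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical allocator, annotated the way __alloc_size(1) ends up expanding. */
	__attribute__((__malloc__, __alloc_size__(1)))
	static void *demo_alloc(size_t size)
	{
		return malloc(size);
	}

	int main(void)
	{
		char *p = demo_alloc(16);

		if (!p)
			return 1;
		/* Without the attribute both builtins must answer "unknown" ((size_t)-1). */
		printf("bos:  %zu\n", __builtin_object_size(p, 0));
	#ifdef __has_builtin
	# if __has_builtin(__builtin_dynamic_object_size)
		printf("bdos: %zu\n", __builtin_dynamic_object_size(p, 0));
	# endif
	#endif
		free(p);
		return 0;
	}

Built with optimization, the first value can be 16; whether the builtins
fold to the exact size depends on compiler version and optimization level,
which is the same caveat the series works around for old GCC.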
From patchwork Thu Sep 30 22:26:59 2021
X-Patchwork-Id: 12538015
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, linux-mm@kvack.org, Joe Perches, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Andy Whitcroft, Dwaipayan Ray, Lukas Bulwahn, Daniel Micay, Dennis Zhou, Tejun Heo, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 3/8] slab: Clean up function prototypes
Date: Thu, 30 Sep 2021 15:26:59 -0700
Message-Id: <20210930222704.2631604-4-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
List-ID: linux-hardening@vger.kernel.org

Based on feedback from Joe Perches and Linus Torvalds, regularize the
slab function prototypes before making attribute changes.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Vlastimil Babka
Cc: linux-mm@kvack.org
Signed-off-by: Kees Cook
---
 include/linux/slab.h | 68 ++++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 083f3ce550bc..d9f14125d7a2 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -152,8 +152,8 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
 			slab_flags_t flags,
 			unsigned int useroffset, unsigned int usersize,
 			void (*ctor)(void *));
-void kmem_cache_destroy(struct kmem_cache *);
-int kmem_cache_shrink(struct kmem_cache *);
+void kmem_cache_destroy(struct kmem_cache *s);
+int kmem_cache_shrink(struct kmem_cache *s);
 
 /*
  * Please use this macro to create slab caches. Simply specify the
@@ -181,11 +181,11 @@ int kmem_cache_shrink(struct kmem_cache *);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *, size_t, gfp_t);
-void kfree(const void *);
-void kfree_sensitive(const void *);
-size_t __ksize(const void *);
-size_t ksize(const void *);
+void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags);
+void kfree(const void *objp);
+void kfree_sensitive(const void *objp);
+size_t __ksize(const void *objp);
+size_t ksize(const void *objp);
 #ifdef CONFIG_PRINTK
 bool kmem_valid_obj(void *object);
 void kmem_dump_obj(void *object);
@@ -426,8 +426,8 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #endif /* !CONFIG_SLOB */
 
 void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
-void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment __malloc;
-void kmem_cache_free(struct kmem_cache *, void *);
+void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
+void kmem_cache_free(struct kmem_cache *s, void *objp);
 
 /*
  * Bulk allocation and freeing operations. These are accelerated in an
@@ -436,8 +436,8 @@ void kmem_cache_free(struct kmem_cache *, void *);
  *
  * Note that interrupts must be enabled when calling these functions.
  */
-void kmem_cache_free_bulk(struct kmem_cache *, size_t, void **);
-int kmem_cache_alloc_bulk(struct kmem_cache *, gfp_t, size_t, void **);
+void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
+int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
 
 /*
  * Caller must not use kfree_bulk() on memory not originally allocated
@@ -450,7 +450,8 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 
 #ifdef CONFIG_NUMA
 void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc;
-void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node) __assume_slab_alignment __malloc;
+void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
+									 __malloc;
 #else
 static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
@@ -464,25 +465,24 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f
 #endif
 
 #ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t) __assume_slab_alignment __malloc;
+extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
+				    __assume_slab_alignment __malloc;
 
 #ifdef CONFIG_NUMA
-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-					 gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment __malloc;
+extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
+					 int node, size_t size) __assume_slab_alignment __malloc;
 #else
-static __always_inline void *
-kmem_cache_alloc_node_trace(struct kmem_cache *s,
-			      gfp_t gfpflags,
-			      int node, size_t size)
+static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
+							  gfp_t gfpflags, int node,
+							  size_t size)
 {
 	return kmem_cache_alloc_trace(s, gfpflags, size);
 }
 #endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
-static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
-		gfp_t flags, size_t size)
+static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags,
+						     size_t size)
 {
 	void *ret = kmem_cache_alloc(s, flags);
 
@@ -490,10 +490,8 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 	return ret;
 }
 
-static __always_inline void *
-kmem_cache_alloc_node_trace(struct kmem_cache *s,
-			      gfp_t gfpflags,
-			      int node, size_t size)
+static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
+							  int node, size_t size)
 {
 	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
 
@@ -502,13 +500,14 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc;
+extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
+									  __malloc;
 
 #ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc;
+extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
+				 __assume_page_alignment __malloc;
 #else
-static __always_inline void *
-kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
+static __always_inline void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
 {
 	return kmalloc_order(size, flags, order);
 }
@@ -638,8 +637,8 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
  * @new_size: new size of a single member of the array
  * @flags: the type of memory to allocate (see kmalloc)
  */
-static __must_check inline void *
-krealloc_array(void *p, size_t new_n, size_t new_size, gfp_t flags)
+static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t new_size,
+						 gfp_t flags)
 {
 	size_t bytes;
 
@@ -668,7 +667,7 @@ static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
  * allocator where we care about the real place the memory allocation
  * request comes from.
 */
-extern void *__kmalloc_track_caller(size_t, gfp_t, unsigned long);
+extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
 #define kmalloc_track_caller(size, flags) \
 	__kmalloc_track_caller(size, flags, _RET_IP_)
 
@@ -691,7 +690,8 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
 
 
 #ifdef CONFIG_NUMA
-extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
+extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
+					 unsigned long caller);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 			_RET_IP_)
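A small illustration of the cleanup (editorial, not from the patch):
unnamed parameters in prototypes are legal but harder to review, and once
positional attributes like __alloc_size(2) arrive in the next patch, named
parameters make it obvious which argument the hint refers to. The
declarations below are simplified stand-ins, not the kernel's.

	#include <stddef.h>

	/* Before: legal, but which argument is the new size? */
	void *example_krealloc(const void *, size_t, unsigned int);

	/* After: named parameters; a positional hint like alloc_size(2) now reads naturally. */
	__attribute__((__alloc_size__(2)))
	void *example_krealloc(const void *objp, size_t new_size, unsigned int flags);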
From patchwork Thu Sep 30 22:27:00 2021
X-Patchwork-Id: 12538021
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Andy Whitcroft, Dennis Zhou, Dwaipayan Ray, Joe Perches, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Tejun Heo, Daniel Micay, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 4/8] slab: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:00 -0700
Message-Id: <20210930222704.2631604-5-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
List-ID: linux-hardening@vger.kernel.org

As already done in GrapheneOS, add the __alloc_size attribute for regular
kmalloc interfaces, to provide additional hinting for better bounds
checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
optimizations.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Vlastimil Babka
Cc: Andy Whitcroft
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Tejun Heo
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
Reviewed-by: Nick Desaulniers
---
 include/linux/slab.h | 61 ++++++++++++++++++++++----------------------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d9f14125d7a2..844b776deecf 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -181,7 +181,7 @@ int kmem_cache_shrink(struct kmem_cache *s);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags);
+void * __must_check krealloc(const void *objp, size_t new_size, gfp_t flags) __alloc_size(2);
 void kfree(const void *objp);
 void kfree_sensitive(const void *objp);
 size_t __ksize(const void *objp);
@@ -425,7 +425,7 @@ static __always_inline unsigned int __kmalloc_index(size_t size,
 #define kmalloc_index(s) __kmalloc_index(s, true)
 #endif /* !CONFIG_SLOB */
 
-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
+void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __alloc_size(1);
 void *kmem_cache_alloc(struct kmem_cache *s, gfp_t flags) __assume_slab_alignment __malloc;
 void kmem_cache_free(struct kmem_cache *s, void *objp);
 
@@ -449,11 +449,12 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 }
 
 #ifdef CONFIG_NUMA
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc;
+void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment
+							 __alloc_size(1);
 void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t flags, int node) __assume_slab_alignment
 									 __malloc;
 #else
-static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline __alloc_size(1) void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
 	return __kmalloc(size, flags);
 }
@@ -466,23 +467,23 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f
 
 #ifdef CONFIG_TRACING
 extern void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
-				    __assume_slab_alignment __malloc;
+				    __assume_slab_alignment __alloc_size(3);
 
 #ifdef CONFIG_NUMA
 extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment __malloc;
+					 int node, size_t size) __assume_slab_alignment
+								__alloc_size(4);
 #else
-static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-							  gfp_t gfpflags, int node,
-							  size_t size)
+static __always_inline __alloc_size(4) void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
+				 gfp_t gfpflags, int node, size_t size)
 {
 	return kmem_cache_alloc_trace(s, gfpflags, size);
 }
 #endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
-static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags,
-						     size_t size)
+static __always_inline __alloc_size(3) void *kmem_cache_alloc_trace(struct kmem_cache *s,
+								     gfp_t flags, size_t size)
 {
 	void *ret = kmem_cache_alloc(s, flags);
 
@@ -501,19 +502,20 @@ static __always_inline void *kmem_cache_alloc_node_trace(struct kmem_cache *s, g
 }
 #endif /* CONFIG_TRACING */
 
 extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment
-									  __malloc;
+									  __alloc_size(1);
 
 #ifdef CONFIG_TRACING
 extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
-				 __assume_page_alignment __malloc;
+				 __assume_page_alignment __alloc_size(1);
 #else
-static __always_inline void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
+static __always_inline __alloc_size(1) void *kmalloc_order_trace(size_t size, gfp_t flags,
+								  unsigned int order)
 {
 	return kmalloc_order(size, flags, order);
 }
 #endif
 
-static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *kmalloc_large(size_t size, gfp_t flags)
 {
 	unsigned int order = get_order(size);
 
 	return kmalloc_order_trace(size, flags, order);
@@ -573,7 +575,7 @@ static __always_inline void *kmalloc_large(size_t size, gfp_t flags)
  *	Try really hard to succeed the allocation but fail
  *	eventually.
  */
-static __always_inline void *kmalloc(size_t size, gfp_t flags)
+static __always_inline __alloc_size(1) void *kmalloc(size_t size, gfp_t flags)
 {
 	if (__builtin_constant_p(size)) {
 #ifndef CONFIG_SLOB
@@ -595,7 +597,7 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 	return __kmalloc(size, flags);
 }
 
-static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
+static __always_inline __alloc_size(1) void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
 #ifndef CONFIG_SLOB
 	if (__builtin_constant_p(size) &&
@@ -619,7 +621,7 @@ static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
  * @size: element size.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kmalloc_array(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
@@ -637,8 +639,10 @@ static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
 * @new_size: new size of a single member of the array
 * @flags: the type of memory to allocate (see kmalloc)
 */
-static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t new_size,
-						 gfp_t flags)
+static inline __alloc_size(2, 3) void * __must_check krealloc_array(void *p,
+								     size_t new_n,
+								     size_t new_size,
+								     gfp_t flags)
 {
 	size_t bytes;
 
@@ -654,7 +658,7 @@ static inline void * __must_check krealloc_array(void *p, size_t new_n, size_t n
 * @size: element size.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kcalloc(size_t n, size_t size, gfp_t flags)
 {
 	return kmalloc_array(n, size, flags | __GFP_ZERO);
 }
@@ -667,12 +671,13 @@ static inline void *kcalloc(size_t n, size_t size, gfp_t flags)
 * allocator where we care about the real place the memory allocation
 * request comes from.
 */
-extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller);
+extern void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
+				    __alloc_size(1);
 #define kmalloc_track_caller(size, flags) \
 	__kmalloc_track_caller(size, flags, _RET_IP_)
 
-static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
-				       int node)
+static inline __alloc_size(1, 2) void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
+							   int node)
 {
 	size_t bytes;
 
@@ -683,7 +688,7 @@ static inline void *kmalloc_array_node(size_t n, size_t size, gfp_t flags,
 	return __kmalloc_node(bytes, flags, node);
 }
 
-static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
+static inline __alloc_size(1, 2) void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
 {
 	return kmalloc_array_node(n, size, flags | __GFP_ZERO, node);
 }
@@ -691,7 +696,7 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
 
 #ifdef CONFIG_NUMA
 extern void *__kmalloc_node_track_caller(size_t size, gfp_t flags, int node,
-					 unsigned long caller);
+					 unsigned long caller) __alloc_size(1);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 			_RET_IP_)
@@ -716,7 +721,7 @@ static inline void *kmem_cache_zalloc(struct kmem_cache *k, gfp_t flags)
 * @size: how many bytes of memory are required.
 * @flags: the type of memory to allocate (see kmalloc).
 */
-static inline void *kzalloc(size_t size, gfp_t flags)
+static inline __alloc_size(1) void *kzalloc(size_t size, gfp_t flags)
 {
 	return kmalloc(size, flags | __GFP_ZERO);
 }
@@ -727,7 +732,7 @@ static inline void *kzalloc(size_t size, gfp_t flags)
 * @flags: the type of memory to allocate (see kmalloc).
 * @node: memory node from which to allocate
 */
-static inline void *kzalloc_node(size_t size, gfp_t flags, int node)
+static inline __alloc_size(1) void *kzalloc_node(size_t size, gfp_t flags, int node)
 {
 	return kmalloc_node(size, flags | __GFP_ZERO, node);
 }
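For illustration only (editorial, not from the patch): a userspace sketch
of the kind of mistake these annotations help flag. All names are made up,
and the real checks live in CONFIG_FORTIFY_SOURCE's fortified string
helpers; this toy version only shows the mechanism.

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	__attribute__((__malloc__, __alloc_size__(1)))
	static void *demo_kmalloc(size_t size)
	{
		return malloc(size);
	}

	/*
	 * Toy "fortified" copy. The real fortify helpers let unknown sizes
	 * ((size_t)-1) pass through; this demo refuses them too, purely so it
	 * stays memory-safe when built without optimization.
	 */
	#define demo_memcpy(dst, src, len) ({				\
		size_t __bos = __builtin_object_size(dst, 0);		\
		int __ok = __bos != (size_t)-1 && (len) <= __bos;	\
		if (__ok)						\
			memcpy(dst, src, len);				\
		__ok;							\
	})

	int main(void)
	{
		char src[16] = { 0 };
		char *dst = demo_kmalloc(8);

		if (!dst)
			return 1;
		/* The alloc_size hint tells the compiler dst is 8 bytes, so this
		 * 16-byte copy can be rejected instead of silently overflowing. */
		if (!demo_memcpy(dst, src, sizeof(src)))
			printf("oversized copy refused\n");
		free(dst);
		return 0;
	}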
From patchwork Thu Sep 30 22:27:01 2021
X-Patchwork-Id: 12538025
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Andy Whitcroft, Dennis Zhou, Dwaipayan Ray, Joe Perches, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Tejun Heo, Daniel Micay, Nick Desaulniers, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 5/8] mm/kvmalloc: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:01 -0700
Message-Id: <20210930222704.2631604-6-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
List-ID: linux-hardening@vger.kernel.org

As already done in GrapheneOS, add the __alloc_size attribute for regular
kvmalloc interfaces, to provide additional hinting for better bounds
checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
optimizations.

Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Vlastimil Babka
Cc: Andy Whitcroft
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Tejun Heo
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
Reviewed-by: Nick Desaulniers
---
 include/linux/mm.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..03dfb466d4f5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -799,21 +799,21 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 }
 #endif
 
-extern void *kvmalloc_node(size_t size, gfp_t flags, int node);
-static inline void *kvmalloc(size_t size, gfp_t flags)
+extern void *kvmalloc_node(size_t size, gfp_t flags, int node) __alloc_size(1);
+static inline __alloc_size(1) void *kvmalloc(size_t size, gfp_t flags)
 {
 	return kvmalloc_node(size, flags, NUMA_NO_NODE);
 }
-static inline void *kvzalloc_node(size_t size, gfp_t flags, int node)
+static inline __alloc_size(1) void *kvzalloc_node(size_t size, gfp_t flags, int node)
 {
 	return kvmalloc_node(size, flags | __GFP_ZERO, node);
 }
-static inline void *kvzalloc(size_t size, gfp_t flags)
+static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
 {
 	return kvmalloc(size, flags | __GFP_ZERO);
 }
 
-static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
 {
 	size_t bytes;
 
@@ -823,13 +823,13 @@ static inline void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
 	return kvmalloc(bytes, flags);
 }
 
-static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)
 {
 	return kvmalloc_array(n, size, flags | __GFP_ZERO);
 }
 
-extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize,
-		       gfp_t flags);
+extern void *kvrealloc(const void *p, size_t oldsize, size_t newsize, gfp_t flags)
+		      __alloc_size(3);
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
 
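Side note for readers (editorial, not part of the patch): when
__alloc_size() is given two argument positions, as in the
kvmalloc_array()/kvcalloc() annotations above, the attribute means the
allocation size is the product of those two arguments. A hedged userspace
sketch with a made-up calloc-style function:

	#include <stdio.h>
	#include <stdlib.h>

	/* alloc_size(1, 2): the object size is n * size, mirroring __alloc_size(1, 2). */
	__attribute__((__malloc__, __alloc_size__(1, 2)))
	static void *demo_calloc(size_t n, size_t size)
	{
		return calloc(n, size);
	}

	int main(void)
	{
		unsigned int *p = demo_calloc(4, sizeof(*p));

		if (!p)
			return 1;
		/* With optimization, the compiler can report 4 * sizeof(unsigned int) here. */
		printf("object size: %zu\n", __builtin_object_size(p, 0));
		free(p);
		return 0;
	}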
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ESYAKcpw8CJ6aupot12QGhm/1PtWxxa9SFKXu5/HUJo=; b=HzrZAfthMsoaL00e6u9lKkzh3vUjofafLRs6T1cTPLgrjzNgkKDIjJBULYa+JfDJrZ 4+ypFNtLvatgK4uaFZtRLIZnR3rZX/i871dgEGNC2tykmEZHc9mUiy7z5CttTDHgWDmW h26rGT4LCopjx4sbUvZIRJHxHuCCfnzwpYb5s= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ESYAKcpw8CJ6aupot12QGhm/1PtWxxa9SFKXu5/HUJo=; b=HVUuZGe/4w6s5HRZqapFUmxp+ZQaJii1871jLzMLxQ4+Xz6zc/CVHbFJJlRYf7N98d fnpdifMeIXTEPV5+gu1Mz8Gmzeae2vFXibcKW3+4+9Q3hJ7/slmP8Vs4pl4gJ+i8LssG W5/esHURL9OE6/6MV5wzQ1WvYriocuvtAc9ubOvE8WTg7oDP0C41KXUKImWNp2npmh9T UeIdql1xV4Hig9Uipu39Ft+uEaTLx2P0FRYIpXRu9PPkOoRsxZlH9iWBCS1XeaVkCINz fUW1NLliE+ux37yNLDdziqjpy4Ub2+pwF4lox5xk6A/xjY64NseGB/xDVnjuJBdeCFPd 1IEQ== X-Gm-Message-State: AOAM533LXjpXKafOhCxXYiavqH5yNK+/lrmi3ENoNZ3XwNQRqsDLwLQ0 mm0L3OGCcCi2d8pjghRxbqtJ7w== X-Google-Smtp-Source: ABdhPJzjKyvNdx2y7W3tZ8k06kf7AdxjQEHIvWHm/wxcPUHyEGBfez366DWf4B1AdFCHRgi3cQWBqw== X-Received: by 2002:a65:62d1:: with SMTP id m17mr6913015pgv.370.1633040830779; Thu, 30 Sep 2021 15:27:10 -0700 (PDT) Received: from www.outflux.net (smtp.outflux.net. [198.145.64.163]) by smtp.gmail.com with ESMTPSA id q4sm4066225pfl.50.2021.09.30.15.27.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 30 Sep 2021 15:27:09 -0700 (PDT) From: Kees Cook To: Andrew Morton Cc: Kees Cook , Andy Whitcroft , Christoph Lameter , David Rientjes , Dennis Zhou , Dwaipayan Ray , Joe Perches , Joonsoo Kim , Lukas Bulwahn , Miguel Ojeda , Nathan Chancellor , Nick Desaulniers , Pekka Enberg , Tejun Heo , Vlastimil Babka , Daniel Micay , Masahiro Yamada , Michal Marek , clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org Subject: [PATCH v3 6/8] mm/vmalloc: Add __alloc_size attributes for better bounds checking Date: Thu, 30 Sep 2021 15:27:02 -0700 Message-Id: <20210930222704.2631604-7-keescook@chromium.org> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org> References: <20210930222704.2631604-1-keescook@chromium.org> MIME-Version: 1.0 X-Developer-Signature: v=1; a=openpgp-sha256; l=2969; h=from:subject; bh=PqMG9LUKCcMh794yj62SW/bUvosL4x5yAHHZWjpUkyg=; b=owEBbQKS/ZANAwAKAYly9N/cbcAmAcsmYgBhVjm34phzSlXdvuTp3+opvclRgdeGSmO+NwyepuYM EN6zSnyJAjMEAAEKAB0WIQSlw/aPIp3WD3I+bhOJcvTf3G3AJgUCYVY5twAKCRCJcvTf3G3AJhT0EA CJkQ005XqUiJSOcNF6agctDoJhc+CVk4wZ2PG6HQUVmHJKwFf67w4PZ9g3p1fsjzXDp9eaYZefJFUi 6JHZRF5jjU0UYYfYT+DIAnlZM5F59JkpxVExn9yFX1HQyiWAe5/l+5ivLQmIspyvdgPt0GkAfHmeO/ SAzlS5DsbIsmGTLy3DBecFEarOb1YWkPC2Bv7+6PBHt17TVvlpe3KrKvykV7blZWLbAXP3gFo73aO7 YKXx06jgVvyoFXBmYg1fbqfB/TuIRdafZpd4e4IXdqUM/sz3BfrZ5Sszh4wefHDCoDpNdwCNaq+coA W1Hf6kjXkHCK0W6df8mlNFWzVvlCrt5xS488eSFMmPYbk3ImPlxfMNlYoW21FSY3z2SGV6UMa9UTce PW5Ozc21RqIbDwIc1m/H97hfNvFCH0evpi6086NOBh3kyFU3RbIngjeRpk+C+awlD56ONUjPYf4dwv 8DssvV22U43dK6J18SZPEd1qOnylxxE5blxqg4EpCCncxUm7sQEmCYqMwp4O4BK73wNhCzMt0+Qi2u AWPmBNoHgN9dPBao7l6NRLBvFtVtgn5ojCl6tSMu0ueoGwYkLf2APnPgslcTmsO0a6TiP0U8MDgrpk OD0LxB+07EMcZPI2t5HQuWLuU9N2QxGPHO2KQqts1PVM1qqJ2ZwHw3aPkKWw== X-Developer-Key: i=keescook@chromium.org; a=openpgp; fpr=A5C3F68F229DD60F723E6E138972F4DFDC6DC026 Precedence: bulk List-ID: X-Mailing-List: 
As already done in GrapheneOS, add the __alloc_size attribute for
appropriate vmalloc allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.

Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Tejun Heo
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
---
 include/linux/vmalloc.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 671d402c3778..0ed56fc10c11 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -136,21 +136,21 @@ static inline void vmalloc_init(void)
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
-extern void *vmalloc(unsigned long size);
-extern void *vzalloc(unsigned long size);
-extern void *vmalloc_user(unsigned long size);
-extern void *vmalloc_node(unsigned long size, int node);
-extern void *vzalloc_node(unsigned long size, int node);
-extern void *vmalloc_32(unsigned long size);
-extern void *vmalloc_32_user(unsigned long size);
-extern void *__vmalloc(unsigned long size, gfp_t gfp_mask);
+extern void *vmalloc(unsigned long size) __alloc_size(1);
+extern void *vzalloc(unsigned long size) __alloc_size(1);
+extern void *vmalloc_user(unsigned long size) __alloc_size(1);
+extern void *vmalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vzalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vmalloc_32(unsigned long size) __alloc_size(1);
+extern void *vmalloc_32_user(unsigned long size) __alloc_size(1);
+extern void *__vmalloc(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
-			const void *caller);
+			const void *caller) __alloc_size(1);
 void *__vmalloc_node(unsigned long size, unsigned long align, gfp_t gfp_mask,
-		int node, const void *caller);
-void *vmalloc_no_huge(unsigned long size);
+		int node, const void *caller) __alloc_size(1);
+void *vmalloc_no_huge(unsigned long size) __alloc_size(1);
 
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
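For context, a simplified sketch of what the __alloc_size() helper used above roughly expands to; the real definition lives in the compiler attribute headers updated earlier in this series, and the exact spelling here is illustrative, not authoritative:

/* alloc_size(1) tells GCC/Clang that argument 1 of the annotated function
 * is the byte size of the object it returns. */
#ifndef __has_attribute
#define __has_attribute(x) 0		/* fallback for very old compilers */
#endif

#if __has_attribute(__alloc_size__)
#define __alloc_size(x, ...)	__attribute__((__alloc_size__(x, ## __VA_ARGS__)))
#else
#define __alloc_size(x, ...)
#endif

/* So, after this patch: */
void *vmalloc(unsigned long size) __alloc_size(1);	/* "size" describes the returned object */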
From patchwork Thu Sep 30 22:27:03 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12538029
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Andy Whitcroft, Christoph Lameter, David Rientjes, Dennis Zhou, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Pekka Enberg, Tejun Heo, Vlastimil Babka, Daniel Micay, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 7/8] mm/page_alloc: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:03 -0700
Message-Id: <20210930222704.2631604-8-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>
As already done in GrapheneOS, add the __alloc_size attribute for
appropriate page allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.

Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Dennis Zhou
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Tejun Heo
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
---
 include/linux/gfp.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..fbd4abc33f24 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -608,9 +608,9 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
 extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
 extern unsigned long get_zeroed_page(gfp_t gfp_mask);
 
-void *alloc_pages_exact(size_t size, gfp_t gfp_mask);
+void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1);
 void free_pages_exact(void *virt, size_t size);
-void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask);
+__meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(1);
 
 #define __get_free_page(gfp_mask) \
 		__get_free_pages((gfp_mask), 0)
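The effect is again easiest to see from a hypothetical caller; the names and sizes below are illustrative only, and the build-time checks assume a compiler that honors __alloc_size():

#include <linux/gfp.h>	/* alloc_pages_exact(), free_pages_exact() */

static void example_pages_exact(size_t runtime_len)
{
	/* Size is a compile-time constant: __alloc_size(1) lets
	 * __builtin_object_size() see 2 * PAGE_SIZE for this pointer, so
	 * constant-size overflows can be flagged at build time. */
	void *fixed = alloc_pages_exact(2 * PAGE_SIZE, GFP_KERNEL);

	/* Size known only at run time: checking falls back to the dynamic
	 * object-size paths (e.g. FORTIFY_SOURCE run-time checks). */
	void *dynamic = alloc_pages_exact(runtime_len, GFP_KERNEL);

	if (fixed)
		free_pages_exact(fixed, 2 * PAGE_SIZE);
	if (dynamic)
		free_pages_exact(dynamic, runtime_len);
}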
From patchwork Thu Sep 30 22:27:04 2021
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 12538031
From: Kees Cook
To: Andrew Morton
Cc: Kees Cook, Dennis Zhou, Tejun Heo, Christoph Lameter, Andy Whitcroft, David Rientjes, Dwaipayan Ray, Joe Perches, Joonsoo Kim, Lukas Bulwahn, Miguel Ojeda, Nathan Chancellor, Nick Desaulniers, Pekka Enberg, Vlastimil Babka, Daniel Micay, Masahiro Yamada, Michal Marek, clang-built-linux@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v3 8/8] percpu: Add __alloc_size attributes for better bounds checking
Date: Thu, 30 Sep 2021 15:27:04 -0700
Message-Id: <20210930222704.2631604-9-keescook@chromium.org>
In-Reply-To: <20210930222704.2631604-1-keescook@chromium.org>
References: <20210930222704.2631604-1-keescook@chromium.org>

As already done in GrapheneOS, add the __alloc_size attribute for
appropriate percpu allocator interfaces, to provide additional hinting
for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other
compiler optimizations.

Note that due to the implementation of the percpu API, this is unlikely
to ever actually provide compile-time checking beyond very simple
non-SMP builds.
But, since they are technically allocators, mark them as such.

Cc: Dennis Zhou
Cc: Tejun Heo
Cc: Christoph Lameter
Cc: Andy Whitcroft
Cc: David Rientjes
Cc: Dwaipayan Ray
Cc: Joe Perches
Cc: Joonsoo Kim
Cc: Lukas Bulwahn
Cc: Miguel Ojeda
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Pekka Enberg
Cc: Vlastimil Babka
Co-developed-by: Daniel Micay
Signed-off-by: Daniel Micay
Signed-off-by: Kees Cook
Acked-by: Dennis Zhou
---
 include/linux/percpu.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 5e76af742c80..98a9371133f8 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -123,7 +123,7 @@ extern int __init pcpu_page_first_chunk(size_t reserved_size,
 				pcpu_fc_populate_pte_fn_t populate_pte_fn);
 #endif
 
-extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align);
+extern void __percpu *__alloc_reserved_percpu(size_t size, size_t align) __alloc_size(1);
 
 extern bool __is_kernel_percpu_address(unsigned long addr, unsigned long *can_addr);
 extern bool is_kernel_percpu_address(unsigned long addr);
@@ -131,8 +131,8 @@ extern bool is_kernel_percpu_address(unsigned long addr);
 extern void __init setup_per_cpu_areas(void);
 #endif
 
-extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp);
-extern void __percpu *__alloc_percpu(size_t size, size_t align);
+extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
+extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
 extern void free_percpu(void __percpu *__pdata);
 extern phys_addr_t per_cpu_ptr_to_phys(void *addr);
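To illustrate the caveat above with a hypothetical user (the struct and function names are made up): percpu allocations are normally reached through per_cpu_ptr()/this_cpu_*() rather than through the returned __percpu pointer itself, which is why the size hint rarely produces compile-time diagnostics here:

#include <linux/percpu.h>

struct example_counters {
	u64 rx;
	u64 tx;
};

static int example_percpu_use(void)
{
	/* alloc_percpu() boils down to __alloc_percpu(sizeof(type), __alignof__(type)),
	 * so the new __alloc_size(1) hint describes the sizeof() argument. */
	struct example_counters __percpu *stats = alloc_percpu(struct example_counters);

	if (!stats)
		return -ENOMEM;

	/* Accesses are translated to per-CPU addresses, hiding the allocation
	 * size from the access site; the annotation mostly documents these
	 * functions as allocators, as the commit message notes. */
	this_cpu_inc(stats->rx);

	free_percpu(stats);
	return 0;
}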