From patchwork Thu Aug 22 23:13:28 2024
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 13774343
From: Kees Cook
To: Vlastimil Babka
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes,
    Joonsoo Kim, Andrew Morton, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Gustavo A. R. Silva,
    Bill Wendling, Justin Stitt, Jann Horn, Przemek Kitszel, Marco Elver,
    linux-mm@kvack.org, Nathan Chancellor, Nick Desaulniers,
    linux-kernel@vger.kernel.org, llvm@lists.linux.dev,
    linux-hardening@vger.kernel.org
Subject: [PATCH v3] slab: Introduce kmalloc_obj() and family
Date: Thu, 22 Aug 2024 16:13:28 -0700
Message-Id: <20240822231324.make.666-kees@kernel.org>

Introduce type-aware kmalloc-family helpers to replace the common idioms
for single, array, and flexible object allocations:

	ptr = kmalloc(sizeof(*ptr), gfp);
	ptr = kzalloc(sizeof(*ptr), gfp);
	ptr = kmalloc_array(count, sizeof(*ptr), gfp);
	ptr = kcalloc(count, sizeof(*ptr), gfp);
	ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);

These become, respectively:

	kmalloc_obj(ptr, gfp);
	kzalloc_obj(ptr, gfp);
	kmalloc_objs(ptr, count, gfp);
	kzalloc_objs(ptr, count, gfp);
	kmalloc_flex(ptr, flex_member, count, gfp);

These each return the assigned value of ptr (which may be NULL on
failure).

For cases where the total size of the allocation is needed, the
kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
macros can be used. For example:

	info->size = struct_size(ptr, flex_member, count);
	ptr = kmalloc(info->size, gfp);

becomes:

	kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);

Internal introspection of the allocated type now becomes possible,
allowing for future alignment-aware choices and hardening work. For
example, adding __alignof(*ptr) as an argument to the internal
allocators so that appropriate/efficient alignment choices can be made,
or being able to correctly choose per-allocation offset randomization
within a bucket that does not break alignment requirements.

Introduce __flex_counter() for when __builtin_get_counted_by() is added
by GCC[1] and Clang[2]. The internal use of __flex_counter() allows for
automatically setting the counter member of a struct's flexible array
member when it has been annotated with __counted_by(), avoiding any
missed early counter initializations while __counted_by() annotations
are added to the kernel. Additionally, this checks for "too large"
allocations based on the type size of the counter variable. For example:

	if (count > type_max(ptr->flex_count))
		fail...;
	info->size = struct_size(ptr, flex_member, count);
	ptr = kmalloc(info->size, gfp);
	ptr->flex_count = count;

becomes (i.e. unchanged from the earlier example):

	kmalloc_flex_sz(ptr, flex_member, count, gfp, &info->size);

Replacing all the existing simple code patterns found via Coccinelle[3]
shows what could be converted immediately (saving roughly 1,500 lines):

	7040 files changed, 14128 insertions(+), 15557 deletions(-)

Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116016 [1]
Link: https://github.com/llvm/llvm-project/issues/99774 [2]
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/kmalloc_obj-assign-size.cocci [3]
Signed-off-by: Kees Cook
---
Initial testing looks good. Before I write all the self-tests, I just
wanted to validate that the new API is reasonable (i.e. it no longer
uses optional argument counts to choose the internal API).

v3:
 - Add .rst documentation
 - Add kern-doc
 - Return ptr instead of size by default
 - Add *_sz() variants that provide allocation size output
 - Implement __flex_counter() logic
v2: https://lore.kernel.org/linux-hardening/20240807235433.work.317-kees@kernel.org/
v1: https://lore.kernel.org/linux-hardening/20240719192744.work.264-kees@kernel.org/

Cc: Vlastimil Babka
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Roman Gushchin
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Gustavo A. R. Silva
Cc: Bill Wendling
Cc: Justin Stitt
Cc: Jann Horn
Cc: Przemek Kitszel
Cc: Marco Elver
Cc: linux-mm@kvack.org
---
 Documentation/process/deprecated.rst |  41 +++++++
 include/linux/compiler_types.h       |  22 ++++
 include/linux/slab.h                 | 174 +++++++++++++++++++++++++++
 3 files changed, 237 insertions(+)

diff --git a/Documentation/process/deprecated.rst b/Documentation/process/deprecated.rst
index 1f7f3e6c9cda..b22ec088a044 100644
--- a/Documentation/process/deprecated.rst
+++ b/Documentation/process/deprecated.rst
@@ -372,3 +372,44 @@ The helper must be used::
 		DECLARE_FLEX_ARRAY(struct type2, two);
 	};
 };
+
+Open-coded kmalloc assignments
+------------------------------
+Performing open-coded kmalloc()-family allocation assignments prevents
+the kernel (and compiler) from being able to examine the type of the
+variable being assigned, which limits any related introspection that
+may help with alignment, wrap-around, or additional hardening. The
+kmalloc_obj() family of macros provides this introspection, which can be
+used for the common code patterns for single, array, and flexible object
+allocations. For example, these open-coded assignments::
+
+	ptr = kmalloc(sizeof(*ptr), gfp);
+	ptr = kzalloc(sizeof(*ptr), gfp);
+	ptr = kmalloc_array(count, sizeof(*ptr), gfp);
+	ptr = kcalloc(count, sizeof(*ptr), gfp);
+	ptr = kmalloc(struct_size(ptr, flex_member, count), gfp);
+
+become, respectively::
+
+	kmalloc_obj(ptr, gfp);
+	kzalloc_obj(ptr, gfp);
+	kmalloc_objs(ptr, count, gfp);
+	kzalloc_objs(ptr, count, gfp);
+	kmalloc_flex(ptr, flex_member, count, gfp);
+
+For the cases where the total size of the allocation is also needed,
+the kmalloc_obj_sz(), kmalloc_objs_sz(), and kmalloc_flex_sz() family of
+macros can be used. For example, converting these assignments::
+
+	total_size = struct_size(ptr, flex_member, count);
+	ptr = kmalloc(total_size, gfp);
+
+becomes::
+
+	kmalloc_flex_sz(ptr, flex_member, count, gfp, &total_size);
+
+If `ptr->flex_member` is annotated with __counted_by(), the allocation
+will automatically fail if `count` is larger than the maximum
+representable value that can be stored in the counter member associated
+with `flex_member`. Similarly, the allocation will fail if the total
+size of the allocation exceeds the maximum value `*total_size` can hold.
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index f14c275950b5..b99deae45210 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -421,6 +421,28 @@ struct ftrace_likely_data {
 #define __member_size(p)	__builtin_object_size(p, 1)
 #endif
 
+#if __has_builtin(__builtin_get_counted_by)
+/**
+ * __flex_counter - Get pointer to counter member for the given
+ *                  flexible array, if it was annotated with __counted_by()
+ * @flex: Pointer to flexible array member of an addressable struct instance
+ *
+ * For example, with:
+ *
+ *	struct foo {
+ *		int counter;
+ *		short array[] __counted_by(counter);
+ *	} *p;
+ *
+ * __flex_counter(p->array) will resolve to &p->counter.
+ *
+ * If p->array is unannotated, this returns (void *)NULL.
+ */
+#define __flex_counter(flex)	__builtin_get_counted_by(flex)
+#else
+#define __flex_counter(flex)	((void *)NULL)
+#endif
+
 /*
  * Some versions of gcc do not mark 'asm goto' volatile:
  *
diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..c37606b9e248 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -686,6 +686,180 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
 }
 #define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))
 
+#define __alloc_objs(ALLOC, P, COUNT, FLAGS, SIZE) \
+({ \
+	size_t __obj_size = size_mul(sizeof(*P), COUNT); \
+	const typeof(_Generic(SIZE, \
+			void *: (size_t *)NULL, \
+			default: SIZE)) __size_ptr = (SIZE); \
+	typeof(P) __obj_ptr = NULL; \
+	/* Does the total size fit in the *SIZE variable? */ \
+	if (!__size_ptr || __obj_size <= type_max(*__size_ptr)) \
+		__obj_ptr = ALLOC(__obj_size, FLAGS); \
+	if (!__obj_ptr) \
+		__obj_size = 0; \
+	if (__size_ptr) \
+		*__size_ptr = __obj_size; \
+	(P) = __obj_ptr; \
+})
+
+#define __alloc_flex(ALLOC, P, FAM, COUNT, FLAGS, SIZE) \
+({ \
+	size_t __count = (COUNT); \
+	size_t __obj_size = struct_size(P, FAM, __count); \
+	const typeof(_Generic(SIZE, \
+			void *: (size_t *)NULL, \
+			default: SIZE)) __size_ptr = (SIZE); \
+	typeof(P) __obj_ptr = NULL; \
+	/* Just query the counter type for type_max checking. */ \
+	typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
+			void *: (size_t *)NULL, \
+			default: __flex_counter(__obj_ptr->FAM))) \
+		__counter_type_ptr = NULL; \
+	/* Does the count fit in the __counted_by counter member? */ \
+	if ((__count <= type_max(*__counter_type_ptr)) && \
+	    /* Does the total size fit in the *SIZE variable? */ \
+	    (!__size_ptr || __obj_size <= type_max(*__size_ptr))) \
+		__obj_ptr = ALLOC(__obj_size, FLAGS); \
+	if (__obj_ptr) { \
+		/* __obj_ptr now allocated so get real counter ptr. */ \
+		typeof(_Generic(__flex_counter(__obj_ptr->FAM), \
+				void *: (size_t *)NULL, \
+				default: __flex_counter(__obj_ptr->FAM))) \
+			__counter_ptr = __flex_counter(__obj_ptr->FAM); \
+		if (__counter_ptr) \
+			*__counter_ptr = __count; \
+	} else { \
+		__obj_size = 0; \
+	} \
+	if (__size_ptr) \
+		*__size_ptr = __obj_size; \
+	(P) = __obj_ptr; \
+})
+
+/**
+ * kmalloc_obj - Allocate a single instance of the given structure
+ * @P: Pointer to hold allocation of the structure
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way.
+ */
+#define kmalloc_obj(P, FLAGS) \
+	__alloc_objs(kmalloc, P, 1, FLAGS, NULL)
+/**
+ * kmalloc_obj_sz - Allocate a single instance of the given structure and
+ *                  store total size
+ * @P: Pointer to hold allocation of the structure
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @SIZE is non-NULL, the
+ * allocation will immediately fail if the total allocation size is larger
+ * than what the type of *@SIZE can represent.
+ */
+#define kmalloc_obj_sz(P, FLAGS, SIZE) \
+	__alloc_objs(kmalloc, P, 1, FLAGS, SIZE)
+/**
+ * kmalloc_objs - Allocate an array of the given structure
+ * @P: Pointer to hold allocation of the structure array
+ * @COUNT: How many elements in the array
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way.
+ */
+#define kmalloc_objs(P, COUNT, FLAGS) \
+	__alloc_objs(kmalloc, P, COUNT, FLAGS, NULL)
+/**
+ * kmalloc_objs_sz - Allocate an array of the given structure and store
+ *                   total size
+ * @P: Pointer to hold allocation of the structure array
+ * @COUNT: How many elements in the array
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @SIZE is non-NULL, the
+ * allocation will immediately fail if the total allocation size is larger
+ * than what the type of *@SIZE can represent.
+ */
+#define kmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+	__alloc_objs(kmalloc, P, COUNT, FLAGS, SIZE)
+/**
+ * kmalloc_flex - Allocate a single instance of the given flexible structure
+ * @P: Pointer to hold allocation of the structure
+ * @FAM: The name of the flexible array member of the structure
+ * @COUNT: How many flexible array member elements are desired
+ * @FLAGS: GFP flags for the allocation
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @FAM has been annotated with
+ * __counted_by(), the allocation will immediately fail if @COUNT is larger
+ * than what the type of the struct's counter variable can represent.
+ */
+#define kmalloc_flex(P, FAM, COUNT, FLAGS) \
+	__alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, NULL)
+
+/**
+ * kmalloc_flex_sz - Allocate a single instance of the given flexible
+ *                   structure and store total size
+ * @P: Pointer to hold allocation of the structure
+ * @FAM: The name of the flexible array member of the structure
+ * @COUNT: How many flexible array member elements are desired
+ * @FLAGS: GFP flags for the allocation
+ * @SIZE: Pointer to variable to hold the total allocation size
+ *
+ * Returns the newly allocated value of @P on success, NULL on failure.
+ * @P is assigned the result, either way. If @FAM has been annotated with
+ * __counted_by(), the allocation will immediately fail if @COUNT is larger
+ * than what the type of the struct's counter variable can represent. If
+ * @SIZE is non-NULL, the allocation will immediately fail if the total
+ * allocation size is larger than what the type of *@SIZE can represent.
+ */
+#define kmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+	__alloc_flex(kmalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kzalloc_obj(P, FLAGS) \
+	__alloc_objs(kzalloc, P, 1, FLAGS, NULL)
+#define kzalloc_obj_sz(P, FLAGS, SIZE) \
+	__alloc_objs(kzalloc, P, 1, FLAGS, SIZE)
+#define kzalloc_objs(P, COUNT, FLAGS) \
+	__alloc_objs(kzalloc, P, COUNT, FLAGS, NULL)
+#define kzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+	__alloc_objs(kzalloc, P, COUNT, FLAGS, SIZE)
+#define kzalloc_flex(P, FAM, COUNT, FLAGS) \
+	__alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+	__alloc_flex(kzalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kvmalloc_obj(P, FLAGS) \
+	__alloc_objs(kvmalloc, P, 1, FLAGS, NULL)
+#define kvmalloc_obj_sz(P, FLAGS, SIZE) \
+	__alloc_objs(kvmalloc, P, 1, FLAGS, SIZE)
+#define kvmalloc_objs(P, COUNT, FLAGS) \
+	__alloc_objs(kvmalloc, P, COUNT, FLAGS, NULL)
+#define kvmalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+	__alloc_objs(kvmalloc, P, COUNT, FLAGS, SIZE)
+#define kvmalloc_flex(P, FAM, COUNT, FLAGS) \
+	__alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kvmalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+	__alloc_flex(kvmalloc, P, FAM, COUNT, FLAGS, SIZE)
+
+#define kvzalloc_obj(P, FLAGS) \
+	__alloc_objs(kvzalloc, P, 1, FLAGS, NULL)
+#define kvzalloc_obj_sz(P, FLAGS, SIZE) \
+	__alloc_objs(kvzalloc, P, 1, FLAGS, SIZE)
+#define kvzalloc_objs(P, COUNT, FLAGS) \
+	__alloc_objs(kvzalloc, P, COUNT, FLAGS, NULL)
+#define kvzalloc_objs_sz(P, COUNT, FLAGS, SIZE) \
+	__alloc_objs(kvzalloc, P, COUNT, FLAGS, SIZE)
+#define kvzalloc_flex(P, FAM, COUNT, FLAGS) \
+	__alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, NULL)
+#define kvzalloc_flex_sz(P, FAM, COUNT, FLAGS, SIZE) \
+	__alloc_flex(kvzalloc, P, FAM, COUNT, FLAGS, SIZE)
+
 #define kmem_buckets_alloc(_b, _size, _flags)	\
 	alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
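
The following is an illustrative usage sketch, not part of the patch: it
shows how a call site might look once the macros above are available.
The struct, member, and function names (struct sample_buf, struct
sample_info, sample_info_fill(), sample_values_alloc(), nr_samples,
samples) are hypothetical and exist only for this example; the macro
calls follow the definitions introduced in include/linux/slab.h above.

/*
 * Usage sketch only: hypothetical struct/function names, written against
 * the kmalloc_obj() family introduced by this patch.
 */
#include <linux/slab.h>
#include <linux/types.h>

struct sample_buf {
	u16 nr_samples;		/* counter member for the flexible array */
	u32 samples[] __counted_by(nr_samples);
};

struct sample_info {
	size_t size;		/* receives the total allocation size */
	struct sample_buf *buf;
};

static int sample_info_fill(struct sample_info *info, u16 count)
{
	/*
	 * Replaces the open-coded pattern:
	 *
	 *	info->size = struct_size(info->buf, samples, count);
	 *	info->buf = kzalloc(info->size, GFP_KERNEL);
	 *	if (!info->buf)
	 *		return -ENOMEM;
	 *	info->buf->nr_samples = count;
	 *
	 * The macro refuses a count that does not fit in nr_samples or a
	 * total size that does not fit in info->size, and on success the
	 * __counted_by() counter has already been written.
	 */
	if (!kzalloc_flex_sz(info->buf, samples, count, GFP_KERNEL, &info->size))
		return -ENOMEM;

	return 0;
}

static u32 *sample_values_alloc(size_t n)
{
	u32 *vals;

	/* Replaces: vals = kcalloc(n, sizeof(*vals), GFP_KERNEL); */
	if (!kzalloc_objs(vals, n, GFP_KERNEL))
		return NULL;

	return vals;
}

Note that the SIZE argument must point at storage that already exists
when the macro runs (info->size here, matching the commit log's
&info->size example); it cannot name a member of the object being
allocated, since the size pointer is captured before the allocation
happens.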