From patchwork Mon Jan 20 07:43:40 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11341197
From: Daniel Axtens
To: kernel-hardening@lists.openwall.com, linux-mm@kvack.org, keescook@chromium.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, Daniel Axtens, "Igor M. Liplianin"
Subject: [PATCH 1/5] altera-stapl: altera_get_note: prevent write beyond end of 'key'
Date: Mon, 20 Jan 2020 18:43:40 +1100
Message-Id: <20200120074344.504-2-dja@axtens.net>
In-Reply-To: <20200120074344.504-1-dja@axtens.net>
References: <20200120074344.504-1-dja@axtens.net>

altera_get_note is called from altera_init, where key is kzalloc(33).
When the allocation functions are annotated to allow the compiler to see
the sizes of objects, and with FORTIFY_SOURCE, we see:

In file included from drivers/misc/altera-stapl/altera.c:14:0:
In function ‘strlcpy’,
    inlined from ‘altera_init’ at drivers/misc/altera-stapl/altera.c:2189:5:
include/linux/string.h:378:4: error: call to ‘__write_overflow’ declared with attribute error: detected write beyond size of object passed as 1st parameter
    __write_overflow();
    ^~~~~~~~~~~~~~~~~~

That refers to this code in altera_get_note:

	if (key != NULL)
		strlcpy(key, &p[note_strings +
				get_unaligned_be32(
				&p[note_table + (8 * i)])],
			length);

The error triggers because the length of 'key' is 33, but the copy uses
the size supplied as the 'length' parameter, which is always 256.

Split the length parameter into 'keylen' and 'vallen', and use the
appropriate length depending on what is being copied.

Detected by compiler error, only compile-tested.

Cc: "Igor M. Liplianin"
Signed-off-by: Daniel Axtens
---
 drivers/misc/altera-stapl/altera.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/misc/altera-stapl/altera.c b/drivers/misc/altera-stapl/altera.c
index 25e5f24b3fec..5bdf57472314 100644
--- a/drivers/misc/altera-stapl/altera.c
+++ b/drivers/misc/altera-stapl/altera.c
@@ -2112,8 +2112,8 @@ static int altera_execute(struct altera_state *astate,
 	return status;
 }
 
-static int altera_get_note(u8 *p, s32 program_size,
-	s32 *offset, char *key, char *value, int length)
+static int altera_get_note(u8 *p, s32 program_size, s32 *offset,
+	char *key, char *value, int keylen, int vallen)
 /*
  * Gets key and value of NOTE fields in the JBC file.
 * Can be called in two modes: if offset pointer is NULL,
@@ -2170,7 +2170,7 @@ static int altera_get_note(u8 *p, s32 program_size,
 				&p[note_table + (8 * i) + 4])];
 
 			if (value != NULL)
-				strlcpy(value, value_ptr, length);
+				strlcpy(value, value_ptr, vallen);
 
 		}
 	}
@@ -2189,13 +2189,13 @@ static int altera_get_note(u8 *p, s32 program_size,
 			strlcpy(key, &p[note_strings +
 				get_unaligned_be32(
 				&p[note_table + (8 * i)])],
-				length);
+				keylen);
 
 			if (value != NULL)
 				strlcpy(value, &p[note_strings +
 					get_unaligned_be32(
 					&p[note_table + (8 * i) + 4])],
-					length);
+					vallen);
 
 			*offset = i + 1;
 		}
@@ -2449,7 +2449,7 @@ int altera_init(struct altera_config *config, const struct firmware *fw)
 			__func__, (format_version == 2) ? "Jam STAPL" :
 					"pre-standardized Jam 1.1");
 		while (altera_get_note((u8 *)fw->data, fw->size,
-					&offset, key, value, 256) == 0)
+					&offset, key, value, 32, 256) == 0)
 			printk(KERN_INFO "%s: NOTE \"%s\" = \"%s\"\n",
 					__func__, key, value);
 	}

From patchwork Mon Jan 20 07:43:41 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11341201
From: Daniel Axtens
To: kernel-hardening@lists.openwall.com, linux-mm@kvack.org, keescook@chromium.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, Daniel Axtens
Subject: [PATCH 2/5] [RFC] kasan: kasan_test: hide allocation sizes from the compiler
Date: Mon, 20 Jan 2020 18:43:41 +1100
Message-Id: <20200120074344.504-3-dja@axtens.net>
In-Reply-To: <20200120074344.504-1-dja@axtens.net>
References: <20200120074344.504-1-dja@axtens.net>

We're about to annotate the allocation functions so that the compiler
will know the sizes of the allocated objects. This is then caught at
compile time by both the testing in copy_to/from_user, and the testing
in fortify.

The simplest way I can find to obscure the size is to pass the memory
through a WRITE_ONCE/READ_ONCE pair.

Create a macro to obscure an object's size, and a kmalloc wrapper to
return an object with an obscured size. Using these is sufficient to
compile without error.

Signed-off-by: Daniel Axtens
---
 lib/test_kasan.c | 48 +++++++++++++++++++++++++++++++++++-------------
 1 file changed, 35 insertions(+), 13 deletions(-)

diff --git a/lib/test_kasan.c b/lib/test_kasan.c
index 328d33beae36..dbbecd75f1e3 100644
--- a/lib/test_kasan.c
+++ b/lib/test_kasan.c
@@ -20,9 +20,28 @@
 #include
 #include
 #include
+#include
 #include
 
+/*
+ * obscure origin of a pointer, so we can test things that check
+ * the size of the underlying object
+ */
+#define OBSCURE_ORIGINAL_OBJECT(x) { \
+		void *bounce; \
+		WRITE_ONCE(bounce, x); \
+		x = READ_ONCE(bounce); \
+	}
+
+static inline void *obscured_kmalloc(size_t size, gfp_t flags)
+{
+	void *result, *bounce;
+	result = kmalloc(size, flags);
+	WRITE_ONCE(bounce, result);
+	return READ_ONCE(bounce);
+}
+
 /*
  * Note: test functions are marked noinline so that their names appear in
  * reports.
@@ -34,7 +53,7 @@ static noinline void __init kmalloc_oob_right(void)
 	size_t size = 123;
 
 	pr_info("out-of-bounds to right\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -50,7 +69,7 @@ static noinline void __init kmalloc_oob_left(void)
 	size_t size = 15;
 
 	pr_info("out-of-bounds to left\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -67,6 +86,7 @@ static noinline void __init kmalloc_node_oob_right(void)
 
 	pr_info("kmalloc_node(): out-of-bounds to right\n");
 	ptr = kmalloc_node(size, GFP_KERNEL, 0);
+	OBSCURE_ORIGINAL_OBJECT(ptr);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -86,7 +106,7 @@ static noinline void __init kmalloc_pagealloc_oob_right(void)
 	 * the page allocator fallback.
 	 */
 	pr_info("kmalloc pagealloc allocation: out-of-bounds to right\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -136,7 +156,7 @@ static noinline void __init kmalloc_large_oob_right(void)
 	 * and does not trigger the page allocator fallback in SLUB.
 	 */
 	pr_info("kmalloc large allocation: out-of-bounds to right\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -155,6 +175,7 @@ static noinline void __init kmalloc_oob_krealloc_more(void)
 	pr_info("out-of-bounds after krealloc more\n");
 	ptr1 = kmalloc(size1, GFP_KERNEL);
 	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	OBSCURE_ORIGINAL_OBJECT(ptr2);
 	if (!ptr1 || !ptr2) {
 		pr_err("Allocation failed\n");
 		kfree(ptr1);
@@ -174,6 +195,7 @@ static noinline void __init kmalloc_oob_krealloc_less(void)
 	pr_info("out-of-bounds after krealloc less\n");
 	ptr1 = kmalloc(size1, GFP_KERNEL);
 	ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
+	OBSCURE_ORIGINAL_OBJECT(ptr2);
 	if (!ptr1 || !ptr2) {
 		pr_err("Allocation failed\n");
 		kfree(ptr1);
@@ -190,7 +212,7 @@ static noinline void __init kmalloc_oob_16(void)
 	} *ptr1, *ptr2;
 
 	pr_info("kmalloc out-of-bounds for 16-bytes access\n");
-	ptr1 = kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
+	ptr1 = obscured_kmalloc(sizeof(*ptr1) - 3, GFP_KERNEL);
 	ptr2 = kmalloc(sizeof(*ptr2), GFP_KERNEL);
 	if (!ptr1 || !ptr2) {
 		pr_err("Allocation failed\n");
@@ -209,7 +231,7 @@ static noinline void __init kmalloc_oob_memset_2(void)
 	size_t size = 8;
 
 	pr_info("out-of-bounds in memset2\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -225,7 +247,7 @@ static noinline void __init kmalloc_oob_memset_4(void)
 	size_t size = 8;
 
 	pr_info("out-of-bounds in memset4\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -242,7 +264,7 @@ static noinline void __init kmalloc_oob_memset_8(void)
 	size_t size = 8;
 
 	pr_info("out-of-bounds in memset8\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -258,7 +280,7 @@ static noinline void __init kmalloc_oob_memset_16(void)
 	size_t size = 16;
 
 	pr_info("out-of-bounds in memset16\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -274,7 +296,7 @@ static noinline void __init kmalloc_oob_in_memset(void)
 	size_t size = 666;
 
 	pr_info("out-of-bounds in memset\n");
-	ptr = kmalloc(size, GFP_KERNEL);
+	ptr = obscured_kmalloc(size, GFP_KERNEL);
 	if (!ptr) {
 		pr_err("Allocation failed\n");
 		return;
@@ -479,7 +501,7 @@ static noinline void __init copy_user_test(void)
 	size_t size = 10;
 	int unused;
 
-	kmem = kmalloc(size, GFP_KERNEL);
+	kmem = obscured_kmalloc(size, GFP_KERNEL);
 	if (!kmem)
 		return;
 
@@ -599,7 +621,7 @@ static noinline void __init kasan_memchr(void)
 	size_t size = 24;
 
 	pr_info("out-of-bounds in memchr\n");
-	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
+	ptr = obscured_kmalloc(size, GFP_KERNEL | __GFP_ZERO);
 	if (!ptr)
 		return;
 
@@ -614,7 +636,7 @@ static noinline void __init kasan_memcmp(void)
 	int arr[9];
 
 	pr_info("out-of-bounds in memcmp\n");
-	ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
+	ptr = obscured_kmalloc(size, GFP_KERNEL | __GFP_ZERO);
 	if (!ptr)
 		return;

From patchwork Mon Jan 20 07:43:42 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11341203
From: Daniel Axtens
To: kernel-hardening@lists.openwall.com, linux-mm@kvack.org, keescook@chromium.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, Daniel Axtens
Subject: [PATCH 3/5] [RFC] staging: rts5208: make len a u16 in rtsx_write_cfg_seq
Date: Mon, 20 Jan 2020 18:43:42 +1100
Message-Id: <20200120074344.504-4-dja@axtens.net>
In-Reply-To: <20200120074344.504-1-dja@axtens.net>
References: <20200120074344.504-1-dja@axtens.net>

A warning occurs when vzalloc is annotated in a subsequent patch to
tell the compiler that its parameter is an allocation size:

drivers/staging/rts5208/rtsx_chip.c: In function ‘rtsx_write_cfg_seq’:
drivers/staging/rts5208/rtsx_chip.c:1453:7: warning: argument 1 value ‘18446744073709551615’ exceeds maximum object size 9223372036854775807 [-Walloc-size-larger-than=]
   data = vzalloc(array_size(dw_len, 4));
   ~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This occurs because len and dw_len are signed integers and the
parameter to array_size is a size_t. If dw_len is a negative integer,
it will become a very large positive number when cast to size_t. This
could cause an overflow, so array_size() will return SIZE_MAX _at
compile time_. gcc then notices that this value is too large for an
allocation and throws a warning.

rtsx_write_cfg_seq is only called from write_cfg_byte in rtsx_scsi.c.
There, len is a u16. So make len a u16 in rtsx_write_cfg_seq too. This
means dw_len can never be negative, avoiding the potential overflow
and the warning.

This should not cause a functional change, but was compile tested only.
Signed-off-by: Daniel Axtens
---
 drivers/staging/rts5208/rtsx_chip.c | 2 +-
 drivers/staging/rts5208/rtsx_chip.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/rts5208/rtsx_chip.c b/drivers/staging/rts5208/rtsx_chip.c
index 17c4131f5f62..4a8cbf7362f7 100644
--- a/drivers/staging/rts5208/rtsx_chip.c
+++ b/drivers/staging/rts5208/rtsx_chip.c
@@ -1432,7 +1432,7 @@ int rtsx_read_cfg_dw(struct rtsx_chip *chip, u8 func_no, u16 addr, u32 *val)
 }
 
 int rtsx_write_cfg_seq(struct rtsx_chip *chip, u8 func, u16 addr, u8 *buf,
-		       int len)
+		       u16 len)
 {
 	u32 *data, *mask;
 	u16 offset = addr % 4;
diff --git a/drivers/staging/rts5208/rtsx_chip.h b/drivers/staging/rts5208/rtsx_chip.h
index bac65784d4a1..9b0024557b7e 100644
--- a/drivers/staging/rts5208/rtsx_chip.h
+++ b/drivers/staging/rts5208/rtsx_chip.h
@@ -963,7 +963,7 @@ int rtsx_write_cfg_dw(struct rtsx_chip *chip, u8 func_no, u16 addr,
 		      u32 mask, u32 val);
 int rtsx_read_cfg_dw(struct rtsx_chip *chip, u8 func_no, u16 addr, u32 *val);
 int rtsx_write_cfg_seq(struct rtsx_chip *chip,
-		       u8 func, u16 addr, u8 *buf, int len);
+		       u8 func, u16 addr, u8 *buf, u16 len);
 int rtsx_read_cfg_seq(struct rtsx_chip *chip,
 		      u8 func, u16 addr, u8 *buf, int len);
 int rtsx_write_phy_register(struct rtsx_chip *chip, u8 addr, u16 val);

From patchwork Mon Jan 20 07:43:43 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11341205
From: Daniel Axtens
To: kernel-hardening@lists.openwall.com, linux-mm@kvack.org, keescook@chromium.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, Daniel Axtens
Subject: [PATCH 4/5] [VERY RFC] mm: kmalloc(_node): return NULL immediately for SIZE_MAX
Date: Mon, 20 Jan 2020 18:43:43 +1100
Message-Id: <20200120074344.504-5-dja@axtens.net>
In-Reply-To: <20200120074344.504-1-dja@axtens.net>
References: <20200120074344.504-1-dja@axtens.net>

kmalloc is sometimes compiled with a size that at compile time may be
equal to SIZE_MAX.

For example, struct_size(struct, array member, array elements) returns
the size of a structure that has an array as the last element,
containing a given number of elements, or SIZE_MAX on overflow.

However, struct_size operates in (arguably) unintuitive ways at
compile time. Consider the following snippet:

struct foo {
	int a;
	int b[0];
};

struct foo *alloc_foo(int elems)
{
	struct foo *result;
	size_t size = struct_size(result, b, elems);

	if (__builtin_constant_p(size)) {
		BUILD_BUG_ON(size == SIZE_MAX);
	}
	result = kmalloc(size, GFP_KERNEL);
	return result;
}

I expected that size would only be constant if alloc_foo() was called
within that translation unit with a constant number of elements, and
the compiler had decided to inline it. I'd therefore expect that
'size' is only SIZE_MAX if the constant provided was a huge number.

However, instead, this function hits the BUILD_BUG_ON, even if never
called:

include/linux/compiler.h:394:38: error: call to ‘__compiletime_assert_32’ declared with attribute error: BUILD_BUG_ON failed: size == SIZE_MAX

This is with gcc 9.2.1, and I've also observed it with a gcc 8 series
compiler.

My best explanation of this is:

 - elems is a signed int, so a small negative number will become a
   very large unsigned number when cast to a size_t, leading to
   overflow.
 - Then, the only way in which size can be a constant is if we hit the
   overflow case, in which 'size' will be 'SIZE_MAX'.

 - So the compiler takes that value into the body of the if statement
   and blows up.

But I could be totally wrong.

Anyway, this is relevant to slab.h because kmalloc() and kmalloc_node()
check if the supplied size is a constant and take a faster path if so.
A number of callers of those functions use struct_size to determine
the size of a memory allocation. Therefore, at compile time, those
functions will go down the constant path, specialising for the
overflow case.

When my next patch is applied, gcc will throw a warning any time
kmalloc_large could be called with a SIZE_MAX size, as gcc deems
SIZE_MAX to be too big an allocation.

So, make functions that check __builtin_constant_p check also against
SIZE_MAX in the constant path, and immediately return NULL if we hit
it. This brings kmalloc() and kmalloc_node() into line with the array
functions kmalloc_array() and kmalloc_array_node() for the overflow
case.

The overall compiled size change per bloat-o-meter is in the noise (a
reduction of <0.01%).
Signed-off-by: Daniel Axtens
---
 include/linux/slab.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 03a389358562..8141c6b1882a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -544,6 +544,9 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 #ifndef CONFIG_SLOB
 		unsigned int index;
 #endif
+		if (unlikely(size == SIZE_MAX))
+			return NULL;
+
 		if (size > KMALLOC_MAX_CACHE_SIZE)
 			return kmalloc_large(size, flags);
 #ifndef CONFIG_SLOB
@@ -562,6 +565,9 @@ static __always_inline void *kmalloc(size_t size, gfp_t flags)
 
 static __always_inline void *kmalloc_node(size_t size, gfp_t flags, int node)
 {
+	if (__builtin_constant_p(size) && size == SIZE_MAX)
+		return NULL;
+
 #ifndef CONFIG_SLOB
 	if (__builtin_constant_p(size) &&
 		size <= KMALLOC_MAX_CACHE_SIZE) {

From patchwork Mon Jan 20 07:43:44 2020
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11341207
From: Daniel Axtens
To: kernel-hardening@lists.openwall.com, linux-mm@kvack.org, keescook@chromium.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, Daniel Axtens, Daniel Micay
Subject: [PATCH 5/5] [RFC] mm: annotate memory allocation functions with their sizes
Date: Mon, 20 Jan 2020 18:43:44 +1100
Message-Id: <20200120074344.504-6-dja@axtens.net>
In-Reply-To: <20200120074344.504-1-dja@axtens.net>
References: <20200120074344.504-1-dja@axtens.net>

gcc and clang support the alloc_size attribute. Quoting the gcc
documentation:

  alloc_size (position)
  alloc_size (position-1, position-2)

  The alloc_size attribute may be applied to a function that returns a
  pointer and takes at least one argument of an integer or enumerated
  type. It indicates that the returned pointer points to memory whose
  size is given by the function argument at position-1, or by the
  product of the arguments at position-1 and position-2. Meaningful
  sizes are positive values less than PTRDIFF_MAX. GCC uses this
  information to improve the results of __builtin_object_size.

gcc supports this back to at least 4.3.6 [1], and clang has supported
it since December 2016 [2]. I think this is sufficient to make it
always-on.

Annotate the kmalloc and vmalloc family: where a memory allocation has
a size knowable at compile time, allow the compiler to use that for
__builtin_object_size() calculations.

There are a couple of limitations:

 * only functions that return a single pointer can be directly
   annotated

 * only functions that take the size as a parameter (or as the product
   of two parameters) can be directly annotated

These could possibly be addressed in future with some hackery.

This is useful for two things:

 * __builtin_object_size() is used in fortify and copy_to/from_user to
   find bugs at compile time and run time.

 * knowing the size allows the compiler to inline things when using
   __builtin_* functions. With my config with FORTIFY_SOURCE enabled I
   see a number of strlcpys being converted into a strlen and inline
   memcpy. This leads to an overall size increase of 0.04% (per
   bloat-o-meter) when compiled with -O2.
[1]: https://gcc.gnu.org/onlinedocs/gcc-4.3.6/gcc/Function-Attributes.html#Function-Attributes
[2]: https://reviews.llvm.org/D14274

Cc: Kees Cook
Cc: Daniel Micay
Signed-off-by: Daniel Axtens
---
 include/linux/compiler_attributes.h |  6 +++++
 include/linux/kasan.h               | 12 ++++-----
 include/linux/slab.h                | 38 ++++++++++++++++++-----------
 include/linux/vmalloc.h             | 26 ++++++++++----------
 4 files changed, 49 insertions(+), 33 deletions(-)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index cdf016596659..ccacbb2f2c56 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -56,6 +56,12 @@
 #define __aligned(x)                    __attribute__((__aligned__(x)))
 #define __aligned_largest               __attribute__((__aligned__))
 
+/*
+ * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-alloc_005fsize-function-attribute
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#alloc-size
+ */
+#define __alloc_size(a, ...)            __attribute__((alloc_size(a, ## __VA_ARGS__)))
+
 /*
  * Note: users of __always_inline currently do not write "inline" themselves,
  * which seems to be required by gcc to apply the attribute according
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5cde9e7c2664..a8da784c98ad 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -53,13 +53,13 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 					const void *object);
 
 void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
-					gfp_t flags);
+					gfp_t flags) __alloc_size(2);
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
 void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
-				  size_t size, gfp_t flags);
+				  size_t size, gfp_t flags) __alloc_size(3);
 void * __must_check kasan_krealloc(const void *object, size_t new_size,
-				   gfp_t flags);
+				   gfp_t flags) __alloc_size(2);
 
 void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
 				     gfp_t flags);
@@ -124,18 +124,18 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 	return (void *)object;
 }
 
-static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
+static inline __alloc_size(2) void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
 {
 	return ptr;
 }
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
-static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
+static inline __alloc_size(3) void *kasan_kmalloc(struct kmem_cache *s, const void *object,
 				  size_t size, gfp_t flags)
 {
 	return (void *)object;
 }
-static inline void *kasan_krealloc(const void *object, size_t new_size,
+static inline __alloc_size(2) void *kasan_krealloc(const void *object, size_t new_size,
 				   gfp_t flags)
 {
 	return (void *)object;
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 8141c6b1882a..fbfc81f37374 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -184,7 +184,7 @@ void memcg_deactivate_kmem_caches(struct mem_cgroup *, struct mem_cgroup *);
 /*
  * Common kmalloc functions provided by all allocators
  */
-void * __must_check krealloc(const void *, size_t, gfp_t);
+void * __must_check krealloc(const void *, size_t, gfp_t) __alloc_size(2);
 void kfree(const void *);
 void kzfree(const void *);
 size_t __ksize(const void *);
@@ -389,7 +389,9 @@ static __always_inline unsigned int kmalloc_index(size_t size)
 }
 #endif /* !CONFIG_SLOB */
 
-void *__kmalloc(size_t size, gfp_t flags) __assume_kmalloc_alignment __malloc;
+__assume_kmalloc_alignment __malloc __alloc_size(1) void *
+__kmalloc(size_t size, gfp_t flags);
+
 void *kmem_cache_alloc(struct kmem_cache *, gfp_t flags) __assume_slab_alignment __malloc;
 void kmem_cache_free(struct kmem_cache *, void *);
 
@@ -413,8 +415,11 @@ static __always_inline void kfree_bulk(size_t size, void **p)
 }
 
 #ifdef CONFIG_NUMA
-void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_kmalloc_alignment __malloc;
-void *kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node) __assume_slab_alignment __malloc;
+__assume_kmalloc_alignment __malloc __alloc_size(1) void *
+__kmalloc_node(size_t size, gfp_t flags, int node);
+
+__assume_slab_alignment __malloc void *
+kmem_cache_alloc_node(struct kmem_cache *, gfp_t flags, int node);
 #else
 static __always_inline void *__kmalloc_node(size_t size, gfp_t flags, int node)
 {
@@ -428,12 +433,14 @@ static __always_inline void *kmem_cache_alloc_node(struct kmem_cache *s, gfp_t f
 #endif
 
 #ifdef CONFIG_TRACING
-extern void *kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t) __assume_slab_alignment __malloc;
+extern __alloc_size(3) void *
+kmem_cache_alloc_trace(struct kmem_cache *, gfp_t, size_t);
 
 #ifdef CONFIG_NUMA
-extern void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
-					 gfp_t gfpflags,
-					 int node, size_t size) __assume_slab_alignment __malloc;
+extern __assume_slab_alignment __malloc __alloc_size(4) void *
+kmem_cache_alloc_node_trace(struct kmem_cache *s,
+			    gfp_t gfpflags,
+			    int node, size_t size);
 #else
 static __always_inline void *
 kmem_cache_alloc_node_trace(struct kmem_cache *s,
@@ -445,8 +452,8 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 #endif /* CONFIG_NUMA */
 
 #else /* CONFIG_TRACING */
-static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
-		gfp_t flags, size_t size)
+static __always_inline __alloc_size(3) void *
+kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t flags, size_t size)
 {
 	void *ret = kmem_cache_alloc(s, flags);
 
@@ -454,7 +461,7 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 	return ret;
 }
 
-static __always_inline void *
+static __always_inline __alloc_size(4) void *
 kmem_cache_alloc_node_trace(struct kmem_cache *s,
 			    gfp_t gfpflags,
 			    int node, size_t size)
@@ -466,10 +473,12 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 }
 #endif /* CONFIG_TRACING */
 
-extern void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc;
+extern __assume_page_alignment __malloc __alloc_size(1) void *
+kmalloc_order(size_t size, gfp_t flags, unsigned int order);
 
 #ifdef CONFIG_TRACING
-extern void *kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order) __assume_page_alignment __malloc;
+extern __assume_page_alignment __malloc __alloc_size(1) void *
+kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order);
 #else
 static __always_inline void *
 kmalloc_order_trace(size_t size, gfp_t flags, unsigned int order)
@@ -645,7 +654,8 @@ static inline void *kcalloc_node(size_t n, size_t size, gfp_t flags, int node)
 
 #ifdef CONFIG_NUMA
-extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
+extern __alloc_size(1) void *
+__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
 #define kmalloc_node_track_caller(size, flags, node) \
 	__kmalloc_node_track_caller(size, flags, node, \
 			_RET_IP_)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0507a162ccd0..a3651bcc62a3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -102,22 +102,22 @@ static inline void vmalloc_init(void)
 static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
-extern void *vmalloc(unsigned long size);
-extern void *vzalloc(unsigned long size);
-extern void *vmalloc_user(unsigned long size);
-extern void *vmalloc_node(unsigned long size, int node);
-extern void *vzalloc_node(unsigned long size, int node);
-extern void *vmalloc_user_node_flags(unsigned long size, int node, gfp_t flags);
-extern void *vmalloc_exec(unsigned long size);
-extern void *vmalloc_32(unsigned long size);
-extern void *vmalloc_32_user(unsigned long size);
-extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot);
+extern void *vmalloc(unsigned long size) __alloc_size(1);
+extern void *vzalloc(unsigned long size) __alloc_size(1);
+extern void *vmalloc_user(unsigned long size) __alloc_size(1);
+extern void *vmalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vzalloc_node(unsigned long size, int node) __alloc_size(1);
+extern void *vmalloc_user_node_flags(unsigned long size, int node, gfp_t flags) __alloc_size(1);
+extern void *vmalloc_exec(unsigned long size) __alloc_size(1);
+extern void *vmalloc_32(unsigned long size) __alloc_size(1);
+extern void *vmalloc_32_user(unsigned long size) __alloc_size(1);
+extern void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot) __alloc_size(1);
 extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
 			unsigned long start, unsigned long end, gfp_t gfp_mask,
 			pgprot_t prot, unsigned long vm_flags, int node,
-			const void *caller);
+			const void *caller) __alloc_size(1);
 #ifndef CONFIG_MMU
-extern void *__vmalloc_node_flags(unsigned long size, int node, gfp_t flags);
+extern void *__vmalloc_node_flags(unsigned long size, int node, gfp_t flags) __alloc_size(1);
 static inline void *__vmalloc_node_flags_caller(unsigned long size, int node,
 						gfp_t flags, void *caller)
 {
@@ -125,7 +125,7 @@ static inline void *__vmalloc_node_flags_caller(unsigned long size, int node,
 }
 #else
 extern void *__vmalloc_node_flags_caller(unsigned long size,
-					 int node, gfp_t flags, void *caller);
+					 int node, gfp_t flags, void *caller) __alloc_size(1);
 #endif
 
 extern void vfree(const void *addr);