From patchwork Wed Sep 28 08:12:52 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12991758
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com,
    thomas.hellstrom@linux.intel.com, jani.nikula@intel.com,
    nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch,
    andi.shyti@linux.intel.com, andrzej.hajda@intel.com,
    keescook@chromium.org, mauro.chehab@linux.intel.com,
    linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com,
    ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev,
    linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org,
    nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com
Subject: [PATCH v13 1/9] overflow: Allow mixed type arguments
Date: Wed, 28 Sep 2022 11:12:52 +0300
Message-Id: <20220928081300.101516-2-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>
References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>

From: Kees Cook

When the check_[op]_overflow() helpers were introduced, all arguments
were required to be the same type to make the fallback macros simpler.
However, now that the fallback macros have been removed[1], it is fine
to allow mixed types, which makes using the helpers much more useful,
as they can be used to test for type-based overflows (e.g. adding two
large ints but storing into a u8), as would be handy in the drm core[2].

Remove the restriction, and add additional self-tests that exercise
some of the mixed-type overflow cases, and double-check for accidental
macro side-effects.

[1] https://git.kernel.org/linus/4eb6bd55cfb22ffc20652732340c4962f3ac9a91
[2] https://lore.kernel.org/lkml/20220824084514.2261614-2-gwan-gyeong.mun@intel.com
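
As a reading aid (not part of the patch; the values and the -ERANGE
policy are illustrative only), the mixed-type form now catches
truncation at the destination type in a single call:

	int a = 200, b = 200;
	u8 sum;

	/* 200 + 200 = 400 fits in an int but not in a u8: returns true. */
	if (check_add_overflow(a, b, &sum))
		return -ERANGE;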

Cc: Rasmus Villemoes
Cc: Andrzej Hajda
Cc: "Gustavo A. R. Silva"
Cc: Nick Desaulniers
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Kees Cook
Signed-off-by: Gwan-gyeong Mun
Reviewed-by: Andrzej Hajda
Reviewed-by: Gwan-gyeong Mun
Tested-by: Gwan-gyeong Mun
---
 include/linux/overflow.h |  72 ++++++++++++++++------------
 lib/overflow_kunit.c     | 101 ++++++++++++++++++++++++++++----------
 2 files changed, 113 insertions(+), 60 deletions(-)

diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index 0eb3b192f07a..19dfdd74835e 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -51,40 +51,50 @@ static inline bool __must_check __must_check_overflow(bool overflow)
 	return unlikely(overflow);
 }
 
-/*
- * For simplicity and code hygiene, the fallback code below insists on
- * a, b and *d having the same type (similar to the min() and max()
- * macros), whereas gcc's type-generic overflow checkers accept
- * different types. Hence we don't just make check_add_overflow an
- * alias for __builtin_add_overflow, but add type checks similar to
- * below.
+/** check_add_overflow() - Calculate addition with overflow checking
+ *
+ * @a: first addend
+ * @b: second addend
+ * @d: pointer to store sum
+ *
+ * Returns 0 on success.
+ *
+ * *@d holds the results of the attempted addition, but is not considered
+ * "safe for use" on a non-zero return value, which indicates that the
+ * sum has overflowed or been truncated.
  */
-#define check_add_overflow(a, b, d) __must_check_overflow(({	\
-	typeof(a) __a = (a);			\
-	typeof(b) __b = (b);			\
-	typeof(d) __d = (d);			\
-	(void) (&__a == &__b);			\
-	(void) (&__a == __d);			\
-	__builtin_add_overflow(__a, __b, __d);	\
-}))
+#define check_add_overflow(a, b, d)	\
+	__must_check_overflow(__builtin_add_overflow(a, b, d))
 
-#define check_sub_overflow(a, b, d) __must_check_overflow(({	\
-	typeof(a) __a = (a);			\
-	typeof(b) __b = (b);			\
-	typeof(d) __d = (d);			\
-	(void) (&__a == &__b);			\
-	(void) (&__a == __d);			\
-	__builtin_sub_overflow(__a, __b, __d);	\
-}))
+/** check_sub_overflow() - Calculate subtraction with overflow checking
+ *
+ * @a: minuend; value to subtract from
+ * @b: subtrahend; value to subtract from @a
+ * @d: pointer to store difference
+ *
+ * Returns 0 on success.
+ *
+ * *@d holds the results of the attempted subtraction, but is not considered
+ * "safe for use" on a non-zero return value, which indicates that the
+ * difference has underflowed or been truncated.
+ */
+#define check_sub_overflow(a, b, d)	\
+	__must_check_overflow(__builtin_sub_overflow(a, b, d))
 
-#define check_mul_overflow(a, b, d) __must_check_overflow(({	\
-	typeof(a) __a = (a);			\
-	typeof(b) __b = (b);			\
-	typeof(d) __d = (d);			\
-	(void) (&__a == &__b);			\
-	(void) (&__a == __d);			\
-	__builtin_mul_overflow(__a, __b, __d);	\
-}))
+/** check_mul_overflow() - Calculate multiplication with overflow checking
+ *
+ * @a: first factor
+ * @b: second factor
+ * @d: pointer to store product
+ *
+ * Returns 0 on success.
+ *
+ * *@d holds the results of the attempted multiplication, but is not
+ * considered "safe for use" on a non-zero return value, which indicates
+ * that the product has overflowed or been truncated.
+ */
+#define check_mul_overflow(a, b, d)	\
+	__must_check_overflow(__builtin_mul_overflow(a, b, d))
 
 /** check_shl_overflow() - Calculate a left-shifted value and check overflow
  *
diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
index 7e3e43679b73..0d98c9bc75da 100644
--- a/lib/overflow_kunit.c
+++ b/lib/overflow_kunit.c
@@ -16,12 +16,15 @@
 #include
 #include
 
-#define DEFINE_TEST_ARRAY(t)			\
-	static const struct test_ ## t {	\
-		t a, b;				\
-		t sum, diff, prod;		\
-		bool s_of, d_of, p_of;		\
-	} t ## _tests[]
+#define DEFINE_TEST_ARRAY_TYPED(t1, t2, t)				\
+	static const struct test_ ## t1 ## _ ## t2 ## __ ## t {	\
+		t1 a;							\
+		t2 b;							\
+		t sum, diff, prod;					\
+		bool s_of, d_of, p_of;					\
+	} t1 ## _ ## t2 ## __ ## t ## _tests[]
+
+#define DEFINE_TEST_ARRAY(t)	DEFINE_TEST_ARRAY_TYPED(t, t, t)
 
 DEFINE_TEST_ARRAY(u8) = {
 	{0, 0, 0, 0, 0, false, false, false},
@@ -222,21 +225,27 @@ DEFINE_TEST_ARRAY(s64) = {
 };
 #endif
 
-#define check_one_op(t, fmt, op, sym, a, b, r, of) do {		\
-	t _r;							\
-	bool _of;						\
-								\
-	_of = check_ ## op ## _overflow(a, b, &_r);		\
-	KUNIT_EXPECT_EQ_MSG(test, _of, of,			\
+#define check_one_op(t, fmt, op, sym, a, b, r, of) do {			\
+	int _a_orig = a, _a_bump = a + 1;				\
+	int _b_orig = b, _b_bump = b + 1;				\
+	bool _of;							\
+	t _r;								\
+									\
+	_of = check_ ## op ## _overflow(a, b, &_r);			\
+	KUNIT_EXPECT_EQ_MSG(test, _of, of,				\
 		"expected "fmt" "sym" "fmt" to%s overflow (type %s)\n",	\
-		a, b, of ? "" : " not", #t);			\
-	KUNIT_EXPECT_EQ_MSG(test, _r, r,			\
+		a, b, of ? "" : " not", #t);				\
+	KUNIT_EXPECT_EQ_MSG(test, _r, r,				\
 		"expected "fmt" "sym" "fmt" == "fmt", got "fmt" (type %s)\n", \
-		a, b, r, _r, #t);				\
+		a, b, r, _r, #t);					\
+	/* Check for internal macro side-effects. */			\
+	_of = check_ ## op ## _overflow(_a_orig++, _b_orig++, &_r);	\
+	KUNIT_EXPECT_EQ_MSG(test, _a_orig, _a_bump, "Unexpected " #op " macro side-effect!\n"); \
+	KUNIT_EXPECT_EQ_MSG(test, _b_orig, _b_bump, "Unexpected " #op " macro side-effect!\n"); \
 } while (0)
 
-#define DEFINE_TEST_FUNC(t, fmt)					\
-static void do_test_ ## t(struct kunit *test, const struct test_ ## t *p) \
+#define DEFINE_TEST_FUNC_TYPED(n, t, fmt)				\
+static void do_test_ ## n(struct kunit *test, const struct test_ ## n *p) \
 {									\
 	check_one_op(t, fmt, add, "+", p->a, p->b, p->sum, p->s_of);	\
 	check_one_op(t, fmt, add, "+", p->b, p->a, p->sum, p->s_of);	\
@@ -245,15 +254,18 @@ static void do_test_ ## t(struct kunit *test, const struct test_ ## t *p) \
 	check_one_op(t, fmt, mul, "*", p->b, p->a, p->prod, p->p_of);	\
 }									\
 									\
-static void t ## _overflow_test(struct kunit *test) {			\
+static void n ## _overflow_test(struct kunit *test) {			\
 	unsigned i;							\
 									\
-	for (i = 0; i < ARRAY_SIZE(t ## _tests); ++i)			\
-		do_test_ ## t(test, &t ## _tests[i]);			\
+	for (i = 0; i < ARRAY_SIZE(n ## _tests); ++i)			\
+		do_test_ ## n(test, &n ## _tests[i]);			\
 	kunit_info(test, "%zu %s arithmetic tests finished\n",		\
-		ARRAY_SIZE(t ## _tests), #t);				\
+		ARRAY_SIZE(n ## _tests), #n);				\
 }
 
+#define DEFINE_TEST_FUNC(t, fmt)					\
+	DEFINE_TEST_FUNC_TYPED(t ## _ ## t ## __ ## t, t, fmt)
+
 DEFINE_TEST_FUNC(u8, "%d");
 DEFINE_TEST_FUNC(s8, "%d");
 DEFINE_TEST_FUNC(u16, "%d");
@@ -265,6 +277,33 @@ DEFINE_TEST_FUNC(u64, "%llu");
 DEFINE_TEST_FUNC(s64, "%lld");
 #endif
 
+DEFINE_TEST_ARRAY_TYPED(u32, u32, u8) = {
+	{0, 0, 0, 0, 0, false, false, false},
+	{U8_MAX, 2, 1, U8_MAX - 2, U8_MAX - 1, true, false, true},
+	{U8_MAX + 1, 0, 0, 0, 0, true, true, false},
+};
+DEFINE_TEST_FUNC_TYPED(u32_u32__u8, u8, "%d");
+
+DEFINE_TEST_ARRAY_TYPED(u32, u32, int) = {
+	{0, 0, 0, 0, 0, false, false, false},
+	{U32_MAX, 0, -1, -1, 0, true, true, false},
+};
+DEFINE_TEST_FUNC_TYPED(u32_u32__int, int, "%d");
+
+DEFINE_TEST_ARRAY_TYPED(u8, u8, int) = {
+	{0, 0, 0, 0, 0, false, false, false},
+	{U8_MAX, U8_MAX, 2 * U8_MAX, 0, U8_MAX * U8_MAX, false, false, false},
+	{1, 2, 3, -1, 2, false, false, false},
+};
+DEFINE_TEST_FUNC_TYPED(u8_u8__int, int, "%d");
+
+DEFINE_TEST_ARRAY_TYPED(int, int, u8) = {
+	{0, 0, 0, 0, 0, false, false, false},
+	{1, 2, 3, U8_MAX, 2, false, true, false},
+	{-1, 0, U8_MAX, U8_MAX, 0, true, true, false},
+};
+DEFINE_TEST_FUNC_TYPED(int_int__u8, u8, "%d");
+
 static void overflow_shift_test(struct kunit *test)
 {
 	int count = 0;
@@ -649,17 +688,21 @@ static void overflow_size_helpers_test(struct kunit *test)
 }
 
 static struct kunit_case overflow_test_cases[] = {
-	KUNIT_CASE(u8_overflow_test),
-	KUNIT_CASE(s8_overflow_test),
-	KUNIT_CASE(u16_overflow_test),
-	KUNIT_CASE(s16_overflow_test),
-	KUNIT_CASE(u32_overflow_test),
-	KUNIT_CASE(s32_overflow_test),
+	KUNIT_CASE(u8_u8__u8_overflow_test),
+	KUNIT_CASE(s8_s8__s8_overflow_test),
+	KUNIT_CASE(u16_u16__u16_overflow_test),
+	KUNIT_CASE(s16_s16__s16_overflow_test),
+	KUNIT_CASE(u32_u32__u32_overflow_test),
+	KUNIT_CASE(s32_s32__s32_overflow_test),
 	/* Clang 13 and earlier generate unwanted libcalls on 32-bit. */
 #if BITS_PER_LONG == 64
-	KUNIT_CASE(u64_overflow_test),
-	KUNIT_CASE(s64_overflow_test),
+	KUNIT_CASE(u64_u64__u64_overflow_test),
+	KUNIT_CASE(s64_s64__s64_overflow_test),
 #endif
+	KUNIT_CASE(u32_u32__u8_overflow_test),
+	KUNIT_CASE(u32_u32__int_overflow_test),
+	KUNIT_CASE(u8_u8__int_overflow_test),
+	KUNIT_CASE(int_int__u8_overflow_test),
 	KUNIT_CASE(overflow_shift_test),
 	KUNIT_CASE(overflow_allocation_test),
 	KUNIT_CASE(overflow_size_helpers_test),
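
As a reading aid (not part of the patch), DEFINE_TEST_ARRAY_TYPED(u32,
u32, u8) above expands to roughly the following declaration; the entry
{U8_MAX, 2, 1, ...} then records that 255 + 2 truncates to 1 in a u8,
with the sum-overflow flag set:

	static const struct test_u32_u32__u8 {
		u32 a;
		u32 b;
		u8 sum, diff, prod;
		bool s_of, d_of, p_of;
	} u32_u32__u8_tests[];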

From patchwork Wed Sep 28 08:12:53 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12991759
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com,
    thomas.hellstrom@linux.intel.com, jani.nikula@intel.com,
    nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch,
    andi.shyti@linux.intel.com, andrzej.hajda@intel.com,
    keescook@chromium.org, mauro.chehab@linux.intel.com,
    linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com,
    ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev,
    linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org,
    nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com
Subject: [PATCH v13 2/9] overflow: Introduce check_assign() and
 check_assign_user_ptr()
Date: Wed, 28 Sep 2022 11:12:53 +0300
Message-Id: <20220928081300.101516-3-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>
References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>

Add a check_assign() macro which assigns a source value to a
destination pointer with an overflow check, and a
check_assign_user_ptr() macro which assigns a source value to a
destination pointer-type variable with an overflow check. If an
explicit overflow check is required while assigning to a user-space
pointer, check_assign_user_ptr() can be used instead of
u64_to_user_ptr() to assign integers into __user pointers along with
an overflow check.

v3: Add is_type_unsigned() macro (Mauro)
    Modify overflows_type() macro to consider signed data types (Mauro)
    Fix the problem that the safe_conversion() macro always returns true
v4: Fix kernel-doc markups
v6: Move the macro addition location so that it can be used by other
    than the drm subsystem (Jani, Mauro, Andi)
    Change is_type_unsigned to is_unsigned_type to have the same name
    form as the is_signed_type macro
v8: Add check_assign() and remove safe_conversion() (Kees)
    Fix overflows_type() to use gcc's built-in overflow function (Andrzej)
    Add overflows_ptr() to allow overflow checking when assigning a value
    into a pointer variable (G.G.)
v9: Fix overflows_type() to use __builtin_add_overflow() instead of
    __builtin_add_overflow_p() (Andrzej)
    Fix overflows_ptr() to use overflows_type() with the unsigned long
    type (Andrzej)
v10: Remove a redundant type check for a pointer. (Andrzej)
     Use the updated check_add_overflow macro instead of
     __builtin_add_overflow (G.G)
     Add the check_assign_user_ptr() macro and drop the overflows_ptr()
     macro (Kees)
v11: Fix an incorrect type assignment between different address spaces
     caused by the wrong use of the __user macro. (kernel test robot)
     Update the macro description (G.G)
v12: Remove the overflows_type() macro here; the updated
     overflows_type() macro will be added in a subsequent patch (G.G)
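
A minimal usage sketch (illustrative only, not part of the patch;
assume next is a u64 read from userspace and the error codes are
arbitrary):

	u32 small;
	void __user *ptr;

	if (check_assign(next, &small))		/* true if next > U32_MAX */
		return -EOVERFLOW;
	if (check_assign_user_ptr(next, ptr))	/* true if next does not fit in a pointer */
		return -EFAULT;

Internally, check_assign() is just check_add_overflow(0, value, ptr),
so the compiler's overflow builtin performs both the assignment and the
range check against the destination type.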

Signed-off-by: Gwan-gyeong Mun
Cc: Thomas Hellström
Cc: Matthew Auld
Cc: Nirmoy Das
Cc: Jani Nikula
Cc: Andi Shyti
Cc: Andrzej Hajda
Cc: Mauro Carvalho Chehab
Cc: Kees Cook
Reviewed-by: Mauro Carvalho Chehab (v5)
Reviewed-by: Andrzej Hajda (v9)
Acked-by: Kees Cook
Reported-by: kernel test robot
---
 drivers/gpu/drm/i915/i915_user_extensions.c |  6 +--
 include/linux/overflow.h                    | 44 +++++++++++++++++++++
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_user_extensions.c b/drivers/gpu/drm/i915/i915_user_extensions.c
index c822d0aafd2d..80ec8390b0d8 100644
--- a/drivers/gpu/drm/i915/i915_user_extensions.c
+++ b/drivers/gpu/drm/i915/i915_user_extensions.c
@@ -50,11 +50,11 @@ int i915_user_extensions(struct i915_user_extension __user *ext,
 		if (err)
 			return err;
 
-		if (get_user(next, &ext->next_extension) ||
-		    overflows_type(next, ext))
+		if (get_user(next, &ext->next_extension))
 			return -EFAULT;
 
-		ext = u64_to_user_ptr(next);
+		if (check_assign_user_ptr(next, ext))
+			return -EFAULT;
 	}
 
 	return 0;
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index 19dfdd74835e..8ccbfa46f0ed 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * We need to compute the minimum and maximum values representable in a given
@@ -127,6 +128,49 @@ static inline bool __must_check __must_check_overflow(bool overflow)
 			(*_d >> _to_shift) != _a);		\
 }))
 
+/**
+ * check_assign - perform an assigning source value into destination pointer
+ * along with an overflow check.
+ *
+ * @value: source value
+ * @ptr: Destination pointer address
+ *
+ * Returns:
+ * If the value would overflow the destination, it returns true. If not return
+ * false. When overflow does not occur, the assigning into destination from
+ * value succeeds. It follows the return policy as other check_*_overflow()
+ * functions return non-zero as a failure.
+ */
+#define check_assign(value, ptr) __must_check_overflow(({	\
+	check_add_overflow(0, value, ptr);			\
+}))
+
+/**
+ * check_assign_user_ptr - perform an assigning source value into destination
+ * pointer type variable along with an overflow check
+ *
+ * @value: source value; a source value is expected to have a value of a size
+ * that can be stored in a pointer-type variable.
+ * @ptr: destination pointer type variable
+ *
+ * u64_to_user_ptr can be used in the kernel to avoid warnings about integers
+ * and pointers of different sizes. But u64_to_user_ptr is not performing the
+ * checking of overflow. If you need an explicit overflow check while
+ * assigning, check_assign_user_ptr() can be used to assign integers into
+ * pointers along with an overflow check. If ptr is not a pointer type,
+ * a warning message outputs while compiling.
+ *
+ * Returns:
+ * If the value would overflow the destination, it returns true. If not return
+ * false. When overflow does not occur, the assigning into ptr from value
+ * succeeds. It follows the return policy as other check_*_overflow() functions
+ * return non-zero as a failure.
+ */
+#define check_assign_user_ptr(value, ptr) __must_check_overflow(({	\
+	uintptr_t kptr;							\
+	check_assign(value, &kptr) ? 1 : (({ ptr = (void __user *)kptr; }), 0); \
+}))
+
 /**
  * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX
  *
From patchwork Wed Sep 28 08:12:54 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12991760
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com,
    thomas.hellstrom@linux.intel.com, jani.nikula@intel.com,
    nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch,
    andi.shyti@linux.intel.com, andrzej.hajda@intel.com,
    keescook@chromium.org, mauro.chehab@linux.intel.com,
    linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com,
    ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev,
    linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org,
    nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com
Subject: [PATCH v13 3/9] overflow: Introduce overflows_type() and
 castable_to_type()
Date: Wed, 28 Sep 2022 11:12:54 +0300
Message-Id: <20220928081300.101516-4-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>
References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>

From: Kees Cook

Implement a robust overflows_type() macro to test if a variable or
constant value would overflow another variable or type. This can be
used as a constant expression for static_assert() (which requires a
constant expression[1][2]) when used on constant values. This must be
constructed manually, since __builtin_add_overflow() does not produce
a constant expression[3].

Additionally add castable_to_type(), similar to __same_type(), but for
checking if a constant value would overflow if cast to a given type.

Add unit tests for overflows_type(), __same_type(), and
castable_to_type() to the existing KUnit "overflow" test.

[1] https://en.cppreference.com/w/c/language/_Static_assert
[2] C11 standard (ISO/IEC 9899:2011): 6.7.10 Static assertions
[3] https://gcc.gnu.org/onlinedocs/gcc/Integer-Overflow-Builtins.html
    6.56 Built-in Functions to Perform Arithmetic with Overflow Checking
    Built-in Function: bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
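
A minimal usage sketch (illustrative only, not part of the patch; it
mirrors the KUnit cases added below, and the helper name and -E2BIG
choice are invented):

	u64 size = get_size_somehow();	/* hypothetical */

	/* Constant expressions resolve at compile time: */
	static_assert(!overflows_type(S8_MAX, s8));
	static_assert(overflows_type(S8_MAX + 1, s8));
	static_assert(castable_to_type(16, u8));

	/* Variables are checked at run time: */
	if (overflows_type(size, int))
		return -E2BIG;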

Cc: Luc Van Oostenryck
Cc: Nathan Chancellor
Cc: Nick Desaulniers
Cc: Tom Rix
Cc: Daniel Latypov
Cc: Vitor Massaru Iha
Cc: "Gustavo A. R. Silva"
Cc: linux-hardening@vger.kernel.org
Cc: llvm@lists.linux.dev
Co-developed-by: Gwan-gyeong Mun
Signed-off-by: Gwan-gyeong Mun
Signed-off-by: Kees Cook
---
 drivers/gpu/drm/i915/i915_utils.h |   4 -
 include/linux/compiler.h          |   1 +
 include/linux/overflow.h          |  48 ++++
 lib/overflow_kunit.c              | 388 +++++++++++++++++++++++++++++-
 4 files changed, 436 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h
index 6c14d13364bf..67a66d4d5c70 100644
--- a/drivers/gpu/drm/i915/i915_utils.h
+++ b/drivers/gpu/drm/i915/i915_utils.h
@@ -111,10 +111,6 @@ bool i915_error_injected(void);
 #define range_overflows_end_t(type, start, size, max) \
 	range_overflows_end((type)(start), (type)(size), (type)(max))
 
-/* Note we don't consider signbits :| */
-#define overflows_type(x, T) \
-	(sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T))
-
 #define ptr_mask_bits(ptr, n) ({					\
 	unsigned long __v = (unsigned long)(ptr);			\
 	(typeof(ptr))(__v & -BIT(n));					\
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 7713d7bcdaea..c631107e93b1 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -244,6 +244,7 @@ static inline void *offset_to_ptr(const int *off)
  * bool and also pointer types.
  */
 #define is_signed_type(type) (((type)(-1)) < (__force type)1)
+#define is_unsigned_type(type) (!is_signed_type(type))
 
 /*
  * This is needed in functions which generate the stack canary, see
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index 8ccbfa46f0ed..f63cefeabcba 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -171,6 +171,54 @@ static inline bool __must_check __must_check_overflow(bool overflow)
 	check_assign(value, &kptr) ? 1 : (({ ptr = (void __user *)kptr; }), 0); \
 }))
 
+#define __overflows_type_constexpr(x, T) (			\
+	is_unsigned_type(typeof(x)) ?				\
+		(x) > type_max(typeof(T)) ? 1 : 0		\
+	: is_unsigned_type(typeof(T)) ?				\
+		(x) < 0 || (x) > type_max(typeof(T)) ? 1 : 0	\
+	: (x) < type_min(typeof(T)) ||				\
+	  (x) > type_max(typeof(T)) ? 1 : 0)
+
+#define __overflows_type(x, T)		({	\
+	typeof(T) v = 0;			\
+	check_add_overflow((x), v, &v);		\
+})
+
+/**
+ * overflows_type - helper for checking the overflows between value, variables,
+ * or data type
+ *
+ * @n: source constant value or variable to be checked
+ * @T: destination variable or data type proposed to store @n
+ *
+ * Compares the @n expression for whether or not it can safely fit in
+ * the storage of the type in @T. @n and @T can have different types.
+ * If @n is a constant expression, this will also resolve to a constant
+ * expression.
+ *
+ * Returns: true if overflow can occur, false otherwise.
+ */
+#define overflows_type(n, T)					\
+	__builtin_choose_expr(__is_constexpr(n),		\
+			      __overflows_type_constexpr(n, T),	\
+			      __overflows_type(n, T))
+
+/**
+ * castable_to_type - like __same_type(), but also allows for casted literals
+ *
+ * @n: variable or constant value
+ * @T: variable or data type
+ *
+ * Unlike the __same_type() macro, this allows a constant value as the
+ * first argument. If this value would not overflow into an assignment
+ * of the second argument's type, it returns true. Otherwise, this falls
+ * back to __same_type().
+ */
+#define castable_to_type(n, T)						\
+	__builtin_choose_expr(__is_constexpr(n),			\
+			      !__overflows_type_constexpr(n, T),	\
+			      __same_type(n, T))
+
 /**
  * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX
  *
diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
index 0d98c9bc75da..44da9d190057 100644
--- a/lib/overflow_kunit.c
+++ b/lib/overflow_kunit.c
@@ -16,6 +16,11 @@
 #include
 #include
 
+/* We're expecting to do a lot of "always true" or "always false" tests. */
+#ifdef CONFIG_CC_IS_CLANG
+#pragma clang diagnostic ignored "-Wtautological-constant-out-of-range-compare"
+#endif
+
 #define DEFINE_TEST_ARRAY_TYPED(t1, t2, t)				\
 	static const struct test_ ## t1 ## _ ## t2 ## __ ## t {	\
 		t1 a;							\
@@ -246,7 +251,7 @@ DEFINE_TEST_ARRAY(s64) = {
 
 #define DEFINE_TEST_FUNC_TYPED(n, t, fmt)				\
 static void do_test_ ## n(struct kunit *test, const struct test_ ## n *p) \
-{ \
+{									\
 	check_one_op(t, fmt, add, "+", p->a, p->b, p->sum, p->s_of);	\
 	check_one_op(t, fmt, add, "+", p->b, p->a, p->sum, p->s_of);	\
 	check_one_op(t, fmt, sub, "-", p->a, p->b, p->diff, p->d_of);	\
@@ -687,6 +692,384 @@ static void overflow_size_helpers_test(struct kunit *test)
 #undef check_one_size_helper
 }
 
+static void overflows_type_test(struct kunit *test)
+{
+	int count = 0;
+	unsigned int var;
+
+#define __TEST_OVERFLOWS_TYPE(func, arg1, arg2, of) do {		\
+	bool __of = func(arg1, arg2);					\
+	KUNIT_EXPECT_EQ_MSG(test, __of, of,				\
+		"expected " #func "(" #arg1 ", " #arg2 " to%s overflow\n", \
+		of ? "" : " not");					\
+	count++;							\
+} while (0)
+
+/* Args are: first type, second type, value, overflow expected */
+#define TEST_OVERFLOWS_TYPE(__t1, __t2, v, of) do {			\
+	__t1 t1 = (v);							\
+	__t2 t2;							\
+	__TEST_OVERFLOWS_TYPE(__overflows_type, t1, t2, of);		\
+	__TEST_OVERFLOWS_TYPE(__overflows_type, t1, __t2, of);		\
+	__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, t2, of);	\
+	__TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, __t2, of); \
+} while (0)
+
"" : " not"); \ + count++; \ +} while (0) + +/* Args are: first type, second type, value, overflow expected */ +#define TEST_OVERFLOWS_TYPE(__t1, __t2, v, of) do { \ + __t1 t1 = (v); \ + __t2 t2; \ + __TEST_OVERFLOWS_TYPE(__overflows_type, t1, t2, of); \ + __TEST_OVERFLOWS_TYPE(__overflows_type, t1, __t2, of); \ + __TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, t2, of); \ + __TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, __t2, of);\ +} while (0) + + TEST_OVERFLOWS_TYPE(u8, u8, U8_MAX, false); + TEST_OVERFLOWS_TYPE(u8, u16, U8_MAX, false); + TEST_OVERFLOWS_TYPE(u8, s8, U8_MAX, true); + TEST_OVERFLOWS_TYPE(u8, s8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(u8, s8, (u8)S8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u8, s16, U8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, u8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, u8, -1, true); + TEST_OVERFLOWS_TYPE(s8, u8, S8_MIN, true); + TEST_OVERFLOWS_TYPE(s8, u16, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, u16, -1, true); + TEST_OVERFLOWS_TYPE(s8, u16, S8_MIN, true); + TEST_OVERFLOWS_TYPE(s8, u32, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, u32, -1, true); + TEST_OVERFLOWS_TYPE(s8, u32, S8_MIN, true); +#if BITS_PER_LONG == 64 + TEST_OVERFLOWS_TYPE(s8, u64, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, u64, -1, true); + TEST_OVERFLOWS_TYPE(s8, u64, S8_MIN, true); +#endif + TEST_OVERFLOWS_TYPE(s8, s8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, s8, S8_MIN, false); + TEST_OVERFLOWS_TYPE(s8, s16, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s8, s16, S8_MIN, false); + TEST_OVERFLOWS_TYPE(u16, u8, U8_MAX, false); + TEST_OVERFLOWS_TYPE(u16, u8, (u16)U8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u16, u8, U16_MAX, true); + TEST_OVERFLOWS_TYPE(u16, s8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(u16, s8, (u16)S8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u16, s8, U16_MAX, true); + TEST_OVERFLOWS_TYPE(u16, s16, S16_MAX, false); + TEST_OVERFLOWS_TYPE(u16, s16, (u16)S16_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u16, s16, U16_MAX, true); + TEST_OVERFLOWS_TYPE(u16, u32, U16_MAX, false); + TEST_OVERFLOWS_TYPE(u16, s32, U16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, u8, U8_MAX, false); + TEST_OVERFLOWS_TYPE(s16, u8, (s16)U8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(s16, u8, -1, true); + TEST_OVERFLOWS_TYPE(s16, u8, S16_MIN, true); + TEST_OVERFLOWS_TYPE(s16, u16, S16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, u16, -1, true); + TEST_OVERFLOWS_TYPE(s16, u16, S16_MIN, true); + TEST_OVERFLOWS_TYPE(s16, u32, S16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, u32, -1, true); + TEST_OVERFLOWS_TYPE(s16, u32, S16_MIN, true); +#if BITS_PER_LONG == 64 + TEST_OVERFLOWS_TYPE(s16, u64, S16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, u64, -1, true); + TEST_OVERFLOWS_TYPE(s16, u64, S16_MIN, true); +#endif + TEST_OVERFLOWS_TYPE(s16, s8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(s16, s8, S8_MIN, false); + TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MIN - 1, true); + TEST_OVERFLOWS_TYPE(s16, s8, S16_MAX, true); + TEST_OVERFLOWS_TYPE(s16, s8, S16_MIN, true); + TEST_OVERFLOWS_TYPE(s16, s16, S16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, s16, S16_MIN, false); + TEST_OVERFLOWS_TYPE(s16, s32, S16_MAX, false); + TEST_OVERFLOWS_TYPE(s16, s32, S16_MIN, false); + TEST_OVERFLOWS_TYPE(u32, u8, U8_MAX, false); + TEST_OVERFLOWS_TYPE(u32, u8, (u32)U8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u32, u8, U32_MAX, true); + TEST_OVERFLOWS_TYPE(u32, s8, S8_MAX, false); + TEST_OVERFLOWS_TYPE(u32, s8, (u32)S8_MAX + 1, true); + TEST_OVERFLOWS_TYPE(u32, s8, U32_MAX, true); + TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX, false); + 
+	TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u32, u16, U32_MAX, true);
+	TEST_OVERFLOWS_TYPE(u32, s16, S16_MAX, false);
+	TEST_OVERFLOWS_TYPE(u32, s16, (u32)S16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u32, s16, U32_MAX, true);
+	TEST_OVERFLOWS_TYPE(u32, u32, U32_MAX, false);
+	TEST_OVERFLOWS_TYPE(u32, s32, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(u32, s32, U32_MAX, true);
+	TEST_OVERFLOWS_TYPE(u32, s32, (u32)S32_MAX + 1, true);
+#if BITS_PER_LONG == 64
+	TEST_OVERFLOWS_TYPE(u32, u64, U32_MAX, false);
+	TEST_OVERFLOWS_TYPE(u32, s64, U32_MAX, false);
+#endif
+	TEST_OVERFLOWS_TYPE(s32, u8, U8_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, u8, (s32)U8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s32, u16, S32_MAX, true);
+	TEST_OVERFLOWS_TYPE(s32, u8, -1, true);
+	TEST_OVERFLOWS_TYPE(s32, u8, S32_MIN, true);
+	TEST_OVERFLOWS_TYPE(s32, u16, U16_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, u16, (s32)U16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s32, u16, S32_MAX, true);
+	TEST_OVERFLOWS_TYPE(s32, u16, -1, true);
+	TEST_OVERFLOWS_TYPE(s32, u16, S32_MIN, true);
+	TEST_OVERFLOWS_TYPE(s32, u32, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, u32, -1, true);
+	TEST_OVERFLOWS_TYPE(s32, u32, S32_MIN, true);
+#if BITS_PER_LONG == 64
+	TEST_OVERFLOWS_TYPE(s32, u64, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, u64, -1, true);
+	TEST_OVERFLOWS_TYPE(s32, u64, S32_MIN, true);
+#endif
+	TEST_OVERFLOWS_TYPE(s32, s8, S8_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, s8, S8_MIN, false);
+	TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MIN - 1, true);
+	TEST_OVERFLOWS_TYPE(s32, s8, S32_MAX, true);
+	TEST_OVERFLOWS_TYPE(s32, s8, S32_MIN, true);
+	TEST_OVERFLOWS_TYPE(s32, s16, S16_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, s16, S16_MIN, false);
+	TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MIN - 1, true);
+	TEST_OVERFLOWS_TYPE(s32, s16, S32_MAX, true);
+	TEST_OVERFLOWS_TYPE(s32, s16, S32_MIN, true);
+	TEST_OVERFLOWS_TYPE(s32, s32, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, s32, S32_MIN, false);
+#if BITS_PER_LONG == 64
+	TEST_OVERFLOWS_TYPE(s32, s64, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s32, s64, S32_MIN, false);
+	TEST_OVERFLOWS_TYPE(u64, u8, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, u8, U8_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, u8, (u64)U8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, u16, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, u16, U16_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, u16, (u64)U16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, u32, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, u32, U32_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, u32, (u64)U32_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, u64, U64_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, s8, S8_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, s8, (u64)S8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, s8, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, s16, S16_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, s16, (u64)S16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, s16, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, s32, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, s32, (u64)S32_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(u64, s32, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, s64, S64_MAX, false);
+	TEST_OVERFLOWS_TYPE(u64, s64, U64_MAX, true);
+	TEST_OVERFLOWS_TYPE(u64, s64, (u64)S64_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, u8, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, u8, S64_MIN, true);
+	TEST_OVERFLOWS_TYPE(s64, u8, -1, true);
+	TEST_OVERFLOWS_TYPE(s64, u8, U8_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, u8, (s64)U8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, u16, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, u16, S64_MIN, true);
+	TEST_OVERFLOWS_TYPE(s64, u16, -1, true);
+	TEST_OVERFLOWS_TYPE(s64, u16, U16_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, u16, (s64)U16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, u32, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, u32, S64_MIN, true);
+	TEST_OVERFLOWS_TYPE(s64, u32, -1, true);
+	TEST_OVERFLOWS_TYPE(s64, u32, U32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, u32, (s64)U32_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, u64, S64_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, u64, S64_MIN, true);
+	TEST_OVERFLOWS_TYPE(s64, u64, -1, true);
+	TEST_OVERFLOWS_TYPE(s64, s8, S8_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, s8, S8_MIN, false);
+	TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MIN - 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s8, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, s16, S16_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, s16, S16_MIN, false);
+	TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MIN - 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s16, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, s32, S32_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, s32, S32_MIN, false);
+	TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MAX + 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MIN - 1, true);
+	TEST_OVERFLOWS_TYPE(s64, s32, S64_MAX, true);
+	TEST_OVERFLOWS_TYPE(s64, s64, S64_MAX, false);
+	TEST_OVERFLOWS_TYPE(s64, s64, S64_MIN, false);
+#endif
+
+	/* Check for macro side-effects. */
+	var = INT_MAX - 1;
+	__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false);
+	__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false);
+	__TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, true);
+	var = INT_MAX - 1;
+	__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false);
+	__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false);
+	__TEST_OVERFLOWS_TYPE(overflows_type, var++, int, true);
+
+	kunit_info(test, "%d overflows_type() tests finished\n", count);
+#undef TEST_OVERFLOWS_TYPE
+#undef __TEST_OVERFLOWS_TYPE
+}
+
+static void same_type_test(struct kunit *test)
+{
+	int count = 0;
+	int var;
+
+#define TEST_SAME_TYPE(t1, t2, same) do {				\
+	typeof(t1) __t1h = type_max(t1);				\
+	typeof(t1) __t1l = type_min(t1);				\
+	typeof(t2) __t2h = type_max(t2);				\
+	typeof(t2) __t2l = type_min(t2);				\
+	KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1h));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1l));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(__t1h, t1));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(__t1l, t1));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2h));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2l));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(__t2h, t2));		\
+	KUNIT_EXPECT_EQ(test, true, __same_type(__t2l, t2));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(t1, t2));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1h));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1l));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(__t1h, t2));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(__t1l, t2));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2h));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2l));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(__t2h, t1));		\
+	KUNIT_EXPECT_EQ(test, same, __same_type(__t2l, t1));		\
+} while (0)
+
+#if BITS_PER_LONG == 64
+# define TEST_SAME_TYPE64(base, t, m)	TEST_SAME_TYPE(base, t, m)
+#else
+# define TEST_SAME_TYPE64(base, t, m)	do { } while (0)
+#endif
+
+#define TEST_TYPE_SETS(base, mu8, mu16, mu32, ms8, ms16, ms32, mu64, ms64) \
+do {									\
+	TEST_SAME_TYPE(base,  u8,  mu8);				\
+	TEST_SAME_TYPE(base, u16, mu16);				\
+	TEST_SAME_TYPE(base, u32, mu32);				\
+	TEST_SAME_TYPE(base,  s8,  ms8);				\
+	TEST_SAME_TYPE(base, s16, ms16);				\
+	TEST_SAME_TYPE(base, s32, ms32);				\
+	TEST_SAME_TYPE64(base, u64, mu64);				\
+	TEST_SAME_TYPE64(base, s64, ms64);				\
+} while (0)
+
+	TEST_TYPE_SETS(u8, true, false, false, false, false, false, false, false);
+	TEST_TYPE_SETS(u16, false, true, false, false, false, false, false, false);
+	TEST_TYPE_SETS(u32, false, false, true, false, false, false, false, false);
+	TEST_TYPE_SETS(s8, false, false, false, true, false, false, false, false);
+	TEST_TYPE_SETS(s16, false, false, false, false, true, false, false, false);
+	TEST_TYPE_SETS(s32, false, false, false, false, false, true, false, false);
+#if BITS_PER_LONG == 64
+	TEST_TYPE_SETS(u64, false, false, false, false, false, false, true, false);
+	TEST_TYPE_SETS(s64, false, false, false, false, false, false, false, true);
+#endif
+
+	/* Check for macro side-effects. */
+	var = 4;
+	KUNIT_EXPECT_EQ(test, var, 4);
+	KUNIT_EXPECT_TRUE(test, __same_type(var++, int));
+	KUNIT_EXPECT_EQ(test, var, 4);
+	KUNIT_EXPECT_TRUE(test, __same_type(int, var++));
+	KUNIT_EXPECT_EQ(test, var, 4);
+	KUNIT_EXPECT_TRUE(test, __same_type(var++, var++));
+	KUNIT_EXPECT_EQ(test, var, 4);
+
+	kunit_info(test, "%d __same_type() tests finished\n", count);
+
+#undef TEST_TYPE_SETS
+#undef TEST_SAME_TYPE64
+#undef TEST_SAME_TYPE
+}
+
+static void castable_to_type_test(struct kunit *test)
+{
+	int count = 0;
+
+#define TEST_CASTABLE_TO_TYPE(arg1, arg2, pass) do {			\
+	bool __pass = castable_to_type(arg1, arg2);			\
+	KUNIT_EXPECT_EQ_MSG(test, __pass, pass,				\
+		"expected castable_to_type(" #arg1 ", " #arg2 ") to%s pass\n", \
+		pass ? "" : " not");					\
+	count++;							\
+} while (0)
+
+	TEST_CASTABLE_TO_TYPE(16, u8, true);
+	TEST_CASTABLE_TO_TYPE(16, u16, true);
+	TEST_CASTABLE_TO_TYPE(16, u32, true);
+	TEST_CASTABLE_TO_TYPE(16, s8, true);
+	TEST_CASTABLE_TO_TYPE(16, s16, true);
+	TEST_CASTABLE_TO_TYPE(16, s32, true);
+	TEST_CASTABLE_TO_TYPE(-16, s8, true);
+	TEST_CASTABLE_TO_TYPE(-16, s16, true);
+	TEST_CASTABLE_TO_TYPE(-16, s32, true);
+#if BITS_PER_LONG == 64
+	TEST_CASTABLE_TO_TYPE(16, u64, true);
+	TEST_CASTABLE_TO_TYPE(-16, s64, true);
+#endif
+
+#define TEST_CASTABLE_TO_TYPE_VAR(width) do {				\
+	u ## width u ## width ## var = 0;				\
+	s ## width s ## width ## var = 0;				\
+									\
+	/* Constant expressions that fit types. */			\
+	TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width ## var, true); \
+	TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width ## var, true); \
+	TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(type_min(s ## width), s ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width ## var, true); \
+	TEST_CASTABLE_TO_TYPE(type_min(u ## width), s ## width ## var, true); \
+	/* Constant expressions that do not fit types. */		\
+	TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width, false);	\
+	TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width ## var, false); \
+	TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width, false);	\
+	TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width ## var, false); \
+	/* Non-constant expression with mismatched type. */		\
+	TEST_CASTABLE_TO_TYPE(s ## width ## var, u ## width, false);	\
+	TEST_CASTABLE_TO_TYPE(u ## width ## var, s ## width, false);	\
+} while (0)
+
+#define TEST_CASTABLE_TO_TYPE_RANGE(width) do {				\
+	unsigned long big = U ## width ## _MAX;				\
+	signed long small = S ## width ## _MIN;				\
+	u ## width u ## width ## var = 0;				\
+	s ## width s ## width ## var = 0;				\
+									\
+	/* Constant expression in range. */				\
+	TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width ## var, true); \
+	TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width, true);	\
+	TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width ## var, true); \
+	/* Constant expression out of range. */				\
+	TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width, false); \
+	TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width ## var, false); \
+	TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width, false); \
+	TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width ## var, false); \
+	/* Non-constant expression with mismatched type. */		\
+	TEST_CASTABLE_TO_TYPE(big, u ## width, false);			\
+	TEST_CASTABLE_TO_TYPE(big, u ## width ## var, false);		\
+	TEST_CASTABLE_TO_TYPE(small, s ## width, false);		\
+	TEST_CASTABLE_TO_TYPE(small, s ## width ## var, false);		\
+} while (0)
+
+	TEST_CASTABLE_TO_TYPE_VAR(8);
+	TEST_CASTABLE_TO_TYPE_VAR(16);
+	TEST_CASTABLE_TO_TYPE_VAR(32);
+#if BITS_PER_LONG == 64
+	TEST_CASTABLE_TO_TYPE_VAR(64);
+#endif
+
+	TEST_CASTABLE_TO_TYPE_RANGE(8);
+	TEST_CASTABLE_TO_TYPE_RANGE(16);
+#if BITS_PER_LONG == 64
+	TEST_CASTABLE_TO_TYPE_RANGE(32);
+#endif
+	kunit_info(test, "%d castable_to_type() tests finished\n", count);
+
+#undef TEST_CASTABLE_TO_TYPE_RANGE
+#undef TEST_CASTABLE_TO_TYPE_VAR
+#undef TEST_CASTABLE_TO_TYPE
+}
+
 static struct kunit_case overflow_test_cases[] = {
 	KUNIT_CASE(u8_u8__u8_overflow_test),
 	KUNIT_CASE(s8_s8__s8_overflow_test),
@@ -706,6 +1089,9 @@ static struct kunit_case overflow_test_cases[] = {
 	KUNIT_CASE(overflow_shift_test),
 	KUNIT_CASE(overflow_allocation_test),
 	KUNIT_CASE(overflow_size_helpers_test),
+	KUNIT_CASE(overflows_type_test),
+	KUNIT_CASE(same_type_test),
+	KUNIT_CASE(castable_to_type_test),
 	{}
 };

From patchwork Wed Sep 28 08:12:55 2022
X-Patchwork-Submitter: Gwan-gyeong Mun
X-Patchwork-Id: 12991761
From: Gwan-gyeong Mun
To: intel-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com,
    thomas.hellstrom@linux.intel.com, jani.nikula@intel.com,
    nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch,
    andi.shyti@linux.intel.com, andrzej.hajda@intel.com,
    keescook@chromium.org, mauro.chehab@linux.intel.com,
    linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com,
    ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev,
    linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org,
    nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com
Subject: [PATCH v13 4/9] drm/i915/gem: Typecheck page lookups
Date: Wed, 28 Sep 2022 11:12:55 +0300
Message-Id: <20220928081300.101516-5-gwan-gyeong.mun@intel.com>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>
References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com>

From: Chris Wilson

We need to check that we avoid integer overflows when looking up a page,
and so fix all the instances where we have mistakenly used a plain
integer instead of a more suitable long. Be pedantic and add integer
typechecking to the lookup so that we can be sure that we are safe.
Also use pgoff_t, as our page lookups must remain compatible with the
page cache; pgoff_t is currently exactly unsigned long.

v2: Move the added i915_utils macro into the drm_util header (Jani N)
v3: Do not use the same macro name on a function. (Mauro)
    For kernel-doc, macros and functions are handled in the same
    namespace; the same macro name on a function prevents ever adding
    documentation for it.
v4: Add kernel-doc markups to the kAPI functions and macros (Mauro)
v5: Fix an alignment to match open parenthesis
v6: Rebase
v10: Use assert_typable instead of the exactly_pgoff_t() macro. (Kees)
v11: Change the use of assert_typable to assert_same_typable (G.G)
v12: Change to the static_assert(__castable_to_type(n, T)) style since
     the assert_same_typable() macro has been dropped. (G.G)
v13: Change the use of __castable_to_type() to castable_to_type()
     Remove an unnecessary header include line. (G.G)
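
The conversions below all follow one pattern: a pgoff_t-typed internal
function plus a wrapper macro that rejects, at build time, any caller
whose index argument cannot be represented as pgoff_t. A distilled
sketch of that pattern (hypothetical names, not the actual i915 code):

	struct page *__lookup_page(struct drm_i915_gem_object *obj, pgoff_t n);

	/* Callers passing a wider or mismatched index type fail to compile. */
	#define lookup_page(obj, n) ({				\
		static_assert(castable_to_type(n, pgoff_t));	\
		__lookup_page(obj, n);				\
	})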

Signed-off-by: Chris Wilson
Signed-off-by: Gwan-gyeong Mun
Cc: Tvrtko Ursulin
Cc: Matthew Auld
Cc: Thomas Hellström
Cc: Kees Cook
Reviewed-by: Nirmoy Das (v2)
Reviewed-by: Mauro Carvalho Chehab (v3)
Reviewed-by: Andrzej Hajda (v5)
---
 drivers/gpu/drm/i915/gem/i915_gem_object.c    |   7 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    | 293 ++++++++++++++++--
 drivers/gpu/drm/i915/gem/i915_gem_pages.c     |  27 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   2 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |  12 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |   8 +-
 .../drm/i915/gem/selftests/i915_gem_object.c  |   8 +-
 drivers/gpu/drm/i915/i915_gem.c               |  18 +-
 drivers/gpu/drm/i915/i915_vma.c               |   8 +-
 9 files changed, 322 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.c b/drivers/gpu/drm/i915/gem/i915_gem_object.c
index 7ff9c7877bec..29ed0ec05d12 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.c
@@ -413,10 +413,11 @@ void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
 static void
 i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
 {
+	pgoff_t idx = offset >> PAGE_SHIFT;
 	void *src_map;
 	void *src_ptr;
 
-	src_map = kmap_atomic(i915_gem_object_get_page(obj, offset >> PAGE_SHIFT));
+	src_map = kmap_atomic(i915_gem_object_get_page(obj, idx));
 
 	src_ptr = src_map + offset_in_page(offset);
 	if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_READ))
@@ -429,9 +430,10 @@ i915_gem_object_read_from_page_kmap(struct drm_i915_gem_object *obj, u64 offset,
 static void
 i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
 {
+	pgoff_t idx = offset >> PAGE_SHIFT;
+	dma_addr_t dma = i915_gem_object_get_dma_address(obj, idx);
 	void __iomem *src_map;
 	void __iomem *src_ptr;
-	dma_addr_t dma = i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT);
 
 	src_map = io_mapping_map_wc(&obj->mm.region->iomap,
 				    dma - obj->mm.region->region.start,
@@ -460,6 +462,7 @@ i915_gem_object_read_from_page_iomap(struct drm_i915_gem_object *obj, u64 offset,
  */
 int i915_gem_object_read_from_page(struct drm_i915_gem_object *obj, u64 offset, void *dst, int size)
 {
+	GEM_BUG_ON(overflows_type(offset >> PAGE_SHIFT, pgoff_t));
 	GEM_BUG_ON(offset >= obj->base.size);
 	GEM_BUG_ON(offset_in_page(offset) > PAGE_SIZE - size);
 	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index a3b7551a57fc..26e7f86dbed9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -27,8 +27,10 @@ enum intel_region_id;
  * spot such a local variable, please consider fixing!
  *
  * Aside from our own locals (for which we have no excuse!):
- * - sg_table embeds unsigned int for num_pages
- * - get_user_pages*() mixed ints with longs
+ * - sg_table embeds unsigned int for nents
+ *
+ * We can check for invalidly typed locals with typecheck(), see for example
+ * i915_gem_object_get_sg().
  */
 #define GEM_CHECK_SIZE_OVERFLOW(sz) \
 	GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX)
@@ -363,44 +365,289 @@ i915_gem_object_get_tile_row_size(const struct drm_i915_gem_object *obj)
 int i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
 			       unsigned int tiling, unsigned int stride);
 
+/**
+ * __i915_gem_object_page_iter_get_sg - helper to find the target scatterlist
+ * pointer and the target page position using pgoff_t n input argument and
+ * i915_gem_object_page_iter
+ * @obj: i915 GEM buffer object
+ * @iter: i915 GEM buffer object page iterator
+ * @n: page offset
+ * @offset: searched physical offset,
+ *          it will be used for returning physical page offset value
+ *
+ * Context: Takes and releases the mutex lock of the i915_gem_object_page_iter.
+ *          Takes and releases the RCU lock to search the radix_tree of
+ *          i915_gem_object_page_iter.
+ *
+ * Returns:
+ * The target scatterlist pointer and the target page position.
+ *
+ * Recommended to use wrapper macro: i915_gem_object_page_iter_get_sg()
+ */
 struct scatterlist *
-__i915_gem_object_get_sg(struct drm_i915_gem_object *obj,
-			 struct i915_gem_object_page_iter *iter,
-			 unsigned int n,
-			 unsigned int *offset, bool dma);
+__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj,
+				   struct i915_gem_object_page_iter *iter,
+				   pgoff_t n,
+				   unsigned int *offset);
 
+/**
+ * i915_gem_object_page_iter_get_sg - wrapper macro for
+ * __i915_gem_object_page_iter_get_sg()
+ * @obj: i915 GEM buffer object
+ * @it: i915 GEM buffer object page iterator
+ * @n: page offset
+ * @offset: searched physical offset,
+ *          it will be used for returning physical page offset value
+ *
+ * Context: Takes and releases the mutex lock of the i915_gem_object_page_iter.
+ *          Takes and releases the RCU lock to search the radix_tree of
+ *          i915_gem_object_page_iter.
+ *
+ * Returns:
+ * The target scatterlist pointer and the target page position.
+ *
+ * In order to avoid the truncation of the input parameter, it checks the page
+ * offset n's type from the input parameter before calling
+ * __i915_gem_object_page_iter_get_sg().
+ */
+#define i915_gem_object_page_iter_get_sg(obj, it, n, offset) ({	\
+	static_assert(castable_to_type(n , pgoff_t));		\
+	__i915_gem_object_page_iter_get_sg(obj, it, n, offset);	\
+})
+
+/**
+ * __i915_gem_object_get_sg - helper to find the target scatterlist
+ * pointer and the target page position using pgoff_t n input argument and
+ * drm_i915_gem_object. It uses an internal shmem scatterlist lookup function.
+ * @obj: i915 GEM buffer object
+ * @n: page offset
+ * @offset: searched physical offset,
+ *          it will be used for returning physical page offset value
+ *
+ * It uses drm_i915_gem_object's internal shmem scatterlist lookup function as
+ * i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg().
+ *
+ * Returns:
+ * The target scatterlist pointer and the target page position.
+ * + * Recommended to use wrapper macro: i915_gem_object_get_sg() + * See also __i915_gem_object_page_iter_get_sg() + */ static inline struct scatterlist * -i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - unsigned int n, - unsigned int *offset) +__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_page, n, offset, false); + return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_page, n, offset); } +/** + * i915_gem_object_get_sg - wrapper macro for __i915_gem_object_get_sg() + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_sg(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_sg(obj, n, offset) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_sg(obj, n, offset); \ +}) + +/** + * __i915_gem_object_get_sg_dma - helper to find the target scatterlist + * pointer and the target page position using pgoff_t n input argument and + * drm_i915_gem_object. It uses an internal DMA mapped scatterlist lookup function + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * It uses drm_i915_gem_object's internal DMA mapped scatterlist lookup function + * as i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg(). + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * Recommended to use wrapper macro: i915_gem_object_get_sg_dma() + * See also __i915_gem_object_page_iter_get_sg() + */ static inline struct scatterlist * -i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, - unsigned int n, - unsigned int *offset) +__i915_gem_object_get_sg_dma(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *offset) { - return __i915_gem_object_get_sg(obj, &obj->mm.get_dma_page, n, offset, true); + return __i915_gem_object_page_iter_get_sg(obj, &obj->mm.get_dma_page, n, offset); } +/** + * i915_gem_object_get_sg_dma - wrapper macro for __i915_gem_object_get_sg_dma() + * @obj: i915 GEM buffer object + * @n: page offset + * @offset: searched physical offset, + * it will be used for returning physical page offset value + * + * Returns: + * The target scatterlist pointer and the target page position. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_sg_dma(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_sg_dma(obj, n, offset) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_sg_dma(obj, n, offset); \ +}) + +/** + * __i915_gem_object_get_page - helper to find the target page with a page offset + * @obj: i915 GEM buffer object + * @n: page offset + * + * It uses drm_i915_gem_object's internal shmem scatterlist lookup function as + * i915_gem_object_page_iter and calls __i915_gem_object_page_iter_get_sg() + * internally. + * + * Returns: + * The target page pointer. 
+ * + * Recommended to use wrapper macro: i915_gem_object_get_page() + * See also __i915_gem_object_page_iter_get_sg() + */ struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, - unsigned int n); +__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n); +/** + * i915_gem_object_get_page - wrapper macro for __i915_gem_object_get_page + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * The target page pointer. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_page(). + * See also __i915_gem_object_page_iter_get_sg() + */ +#define i915_gem_object_get_page(obj, n) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_page(obj, n); \ +}) + +/** + * __i915_gem_object_get_dirty_page - helper to find the target page with a page + * offset + * @obj: i915 GEM buffer object + * @n: page offset + * + * It works like i915_gem_object_get_page(), but it marks the returned page dirty. + * + * Returns: + * The target page pointer. + * + * Recommended to use wrapper macro: i915_gem_object_get_dirty_page() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page() + */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n); +__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n); + +/** + * i915_gem_object_get_dirty_page - wrapper macro for __i915_gem_object_get_dirty_page + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * The target page pointer. + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dirty_page(). + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_page() + */ +#define i915_gem_object_get_dirty_page(obj, n) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_dirty_page(obj, n); \ +}) +/** + * __i915_gem_object_get_dma_address_len - helper to get bus addresses of + * targeted DMA mapped scatterlist from i915 GEM buffer object and its length + * @obj: i915 GEM buffer object + * @n: page offset + * @len: DMA mapped scatterlist's DMA bus addresses length to return + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * Recommended to use wrapper macro: i915_gem_object_get_dma_address_len() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma() + */ dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len); +__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, pgoff_t n, + unsigned int *len); +/** + * i915_gem_object_get_dma_address_len - wrapper macro for + * __i915_gem_object_get_dma_address_len + * @obj: i915 GEM buffer object + * @n: page offset + * @len: DMA mapped scatterlist's DMA bus addresses length to return + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dma_address_len(). + * See also __i915_gem_object_page_iter_get_sg() and + * __i915_gem_object_get_dma_address_len() */ +#define i915_gem_object_get_dma_address_len(obj, n, len) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_dma_address_len(obj, n, len); \ +}) + +/** + * __i915_gem_object_get_dma_address - helper to get bus addresses of + * targeted DMA mapped scatterlist from i915 GEM buffer object + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * Recommended to use wrapper macro: i915_gem_object_get_dma_address() + * See also __i915_gem_object_page_iter_get_sg() and __i915_gem_object_get_sg_dma() + */ dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n); +__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n); + +/** + * i915_gem_object_get_dma_address - wrapper macro for + * __i915_gem_object_get_dma_address + * @obj: i915 GEM buffer object + * @n: page offset + * + * Returns: + * Bus addresses of targeted DMA mapped scatterlist + * + * In order to avoid the truncation of the input parameter, it checks the page + * offset n's type from the input parameter before calling + * __i915_gem_object_get_dma_address(). + * See also __i915_gem_object_page_iter_get_sg() and + * __i915_gem_object_get_dma_address() + */ +#define i915_gem_object_get_dma_address(obj, n) ({ \ + static_assert(castable_to_type(n , pgoff_t)); \ + __i915_gem_object_get_dma_address(obj, n); \ +}) void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj, struct sg_table *pages, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c index 16f845663ff2..383dc9d63e46 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c @@ -522,14 +522,16 @@ void __i915_gem_object_release_map(struct drm_i915_gem_object *obj) } struct scatterlist * -__i915_gem_object_get_sg(struct drm_i915_gem_object *obj, - struct i915_gem_object_page_iter *iter, - unsigned int n, - unsigned int *offset, - bool dma) +__i915_gem_object_page_iter_get_sg(struct drm_i915_gem_object *obj, + struct i915_gem_object_page_iter *iter, + pgoff_t n, + unsigned int *offset) + { - struct scatterlist *sg; + const bool dma = iter == &obj->mm.get_dma_page || + iter == &obj->ttm.get_io_page; unsigned int idx, count; + struct scatterlist *sg; might_sleep(); GEM_BUG_ON(n >= obj->base.size >> PAGE_SHIFT); @@ -637,7 +639,7 @@ __i915_gem_object_get_sg(struct drm_i915_gem_object *obj, } struct page * -i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) +__i915_gem_object_get_page(struct drm_i915_gem_object *obj, pgoff_t n) { struct scatterlist *sg; unsigned int offset; @@ -650,8 +652,7 @@ i915_gem_object_get_page(struct drm_i915_gem_object *obj, unsigned int n) /* Like i915_gem_object_get_page(), but mark the returned page dirty */ struct page * -i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, - unsigned int n) +__i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, pgoff_t n) { struct page *page; @@ -663,9 +664,8 @@ i915_gem_object_get_dirty_page(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, - unsigned long n, - unsigned int *len) +__i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, + pgoff_t n, unsigned int *len) { struct scatterlist *sg; unsigned int offset; @@ -679,8 +679,7 @@
i915_gem_object_get_dma_address_len(struct drm_i915_gem_object *obj, } dma_addr_t -i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, - unsigned long n) +__i915_gem_object_get_dma_address(struct drm_i915_gem_object *obj, pgoff_t n) { return i915_gem_object_get_dma_address_len(obj, n, NULL); } diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 3dc6acfcf4ec..0f9dd790ceed 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -694,7 +694,7 @@ static unsigned long i915_ttm_io_mem_pfn(struct ttm_buffer_object *bo, GEM_WARN_ON(bo->ttm); base = obj->mm.region->iomap.base - obj->mm.region->region.start; - sg = __i915_gem_object_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs, true); + sg = i915_gem_object_page_iter_get_sg(obj, &obj->ttm.get_io_page, page_offset, &ofs); return ((base + sg_dma_address(sg)) >> PAGE_SHIFT) + ofs; } diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c index c6ad67b90e8a..a18a890e681f 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c @@ -455,7 +455,8 @@ static int gpu_fill(struct intel_context *ce, static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) { const bool has_llc = HAS_LLC(to_i915(obj->base.dev)); - unsigned int n, m, need_flush; + unsigned int need_flush; + unsigned long n, m; int err; i915_gem_object_lock(obj, NULL); @@ -485,7 +486,8 @@ static int cpu_fill(struct drm_i915_gem_object *obj, u32 value) static noinline int cpu_check(struct drm_i915_gem_object *obj, unsigned int idx, unsigned int max) { - unsigned int n, m, needs_flush; + unsigned int needs_flush; + unsigned long n; int err; i915_gem_object_lock(obj, NULL); @@ -494,7 +496,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, goto out_unlock; for (n = 0; n < real_page_count(obj); n++) { - u32 *map; + u32 *map, m; map = kmap_atomic(i915_gem_object_get_page(obj, n)); if (needs_flush & CLFLUSH_BEFORE) @@ -502,7 +504,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (m = 0; m < max; m++) { if (map[m] != m) { - pr_err("%pS: Invalid value at object %d page %d/%ld, offset %d/%d: found %x expected %x\n", + pr_err("%pS: Invalid value at object %d page %ld/%ld, offset %d/%d: found %x expected %x\n", __builtin_return_address(0), idx, n, real_page_count(obj), m, max, map[m], m); @@ -513,7 +515,7 @@ static noinline int cpu_check(struct drm_i915_gem_object *obj, for (; m < DW_PER_PAGE; m++) { if (map[m] != STACK_MAGIC) { - pr_err("%pS: Invalid value at object %d page %d, offset %d: found %x expected %x (uninitialised)\n", + pr_err("%pS: Invalid value at object %d page %ld, offset %d: found %x expected %x (uninitialised)\n", __builtin_return_address(0), idx, n, m, map[m], STACK_MAGIC); err = -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c index 1cae24349a96..e56e0fe88515 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c @@ -96,11 +96,11 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_gtt_view view; struct i915_vma *vma; + unsigned long offset; unsigned long page; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; int err; @@ -157,7 +157,7 @@ static int 
check_partial_mapping(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, @@ -213,10 +213,10 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, for_each_prime_number_from(page, 1, npages) { struct i915_gtt_view view = compute_partial_view(obj, page, MIN_CHUNK_PAGES); + unsigned long offset; u32 __iomem *io; struct page *p; unsigned int n; - u64 offset; u32 *cpu; GEM_BUG_ON(view.partial.size > nreal); @@ -253,7 +253,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj, cpu = kmap(p) + offset_in_page(offset); drm_clflush_virt_range(cpu, sizeof(*cpu)); if (*cpu != (u32)page) { - pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n", + pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%lu + %u [0x%lx]) of 0x%x, found 0x%x\n", page, n, view.partial.offset, view.partial.size, diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c index bdf5bb40ccf1..19e374f68ff7 100644 --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_object.c @@ -33,10 +33,10 @@ static int igt_gem_object(void *arg) static int igt_gem_huge(void *arg) { - const unsigned int nreal = 509; /* just to be awkward */ + const unsigned long nreal = 509; /* just to be awkward */ struct drm_i915_private *i915 = arg; struct drm_i915_gem_object *obj; - unsigned int n; + unsigned long n; int err; /* Basic sanitycheck of our huge fake object allocation */ @@ -49,7 +49,7 @@ static int igt_gem_huge(void *arg) err = i915_gem_object_pin_pages_unlocked(obj); if (err) { - pr_err("Failed to allocate %u pages (%lu total), err=%d\n", + pr_err("Failed to allocate %lu pages (%lu total), err=%d\n", nreal, obj->base.size / PAGE_SIZE, err); goto out; } @@ -57,7 +57,7 @@ static int igt_gem_huge(void *arg) for (n = 0; n < obj->base.size / PAGE_SIZE; n++) { if (i915_gem_object_get_page(obj, n) != i915_gem_object_get_page(obj, n % nreal)) { - pr_err("Page lookup mismatch at index %u [%u]\n", + pr_err("Page lookup mismatch at index %lu [%lu]\n", n, n % nreal); err = -EINVAL; goto out_unpin; diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c index 88df9a35e0fe..68cbe6233eef 100644 --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -229,8 +229,9 @@ i915_gem_shmem_pread(struct drm_i915_gem_object *obj, struct drm_i915_gem_pread *args) { unsigned int needs_clflush; - unsigned int idx, offset; char __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; @@ -383,13 +384,17 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj, { struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; + unsigned long remain, offset; intel_wakeref_t wakeref; struct 
drm_mm_node node; void __user *user_data; struct i915_vma *vma; - u64 remain, offset; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + wakeref = intel_runtime_pm_get(&i915->runtime_pm); vma = i915_gem_gtt_prepare(obj, &node, false); @@ -540,13 +545,17 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj, struct drm_i915_private *i915 = to_i915(obj->base.dev); struct i915_ggtt *ggtt = to_gt(i915)->ggtt; struct intel_runtime_pm *rpm = &i915->runtime_pm; + unsigned long remain, offset; intel_wakeref_t wakeref; struct drm_mm_node node; struct i915_vma *vma; - u64 remain, offset; void __user *user_data; int ret = 0; + if (overflows_type(args->size, remain) || + overflows_type(args->offset, offset)) + return -EINVAL; + if (i915_gem_object_has_struct_page(obj)) { /* * Avoid waking the device up if we can fallback, as @@ -654,8 +663,9 @@ i915_gem_shmem_pwrite(struct drm_i915_gem_object *obj, { unsigned int partial_cacheline_write; unsigned int needs_clflush; - unsigned int offset, idx; void __user *user_data; + unsigned long offset; + pgoff_t idx; u64 remain; int ret; diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c index f17c09ead7d7..bf3a30cd02bf 100644 --- a/drivers/gpu/drm/i915/i915_vma.c +++ b/drivers/gpu/drm/i915/i915_vma.c @@ -909,7 +909,7 @@ rotate_pages(struct drm_i915_gem_object *obj, unsigned int offset, struct sg_table *st, struct scatterlist *sg) { unsigned int column, row; - unsigned int src_idx; + pgoff_t src_idx; for (column = 0; column < width; column++) { unsigned int left; @@ -1015,7 +1015,7 @@ add_padding_pages(unsigned int count, static struct scatterlist * remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int offset, unsigned int alignment_pad, + unsigned long offset, unsigned int alignment_pad, unsigned int width, unsigned int height, unsigned int src_stride, unsigned int dst_stride, struct sg_table *st, struct scatterlist *sg, @@ -1074,7 +1074,7 @@ remap_tiled_color_plane_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_contiguous_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, + pgoff_t obj_offset, unsigned int count, struct sg_table *st, struct scatterlist *sg) { @@ -1107,7 +1107,7 @@ remap_contiguous_pages(struct drm_i915_gem_object *obj, static struct scatterlist * remap_linear_color_plane_pages(struct drm_i915_gem_object *obj, - unsigned int obj_offset, unsigned int alignment_pad, + pgoff_t obj_offset, unsigned int alignment_pad, unsigned int size, struct sg_table *st, struct scatterlist *sg, unsigned int *gtt_offset) From patchwork Wed Sep 28 08:12:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12991762 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 303C2C04A95 for ; Wed, 28 Sep 2022 08:15:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233296AbiI1IPO (ORCPT ); Wed, 28 Sep 2022 04:15:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233982AbiI1IOc (ORCPT ); Wed, 28 Sep 2022 04:14:32 -0400 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 28D951F1E85; Wed, 28 Sep 2022 01:14:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1664352851; x=1695888851; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=LPxEcywviOKCcr4oC8fHS+hI4ha3BcDOTKfEMwQTLaI=; b=Do2+0lhKzem5fhtt2FBbl0vE7s9BhmvjUxT1fAPmVLQJSwH64X7Xnzne jh9gmHBFXH5F+MQnQ4vSwo4AUIzZzRE4K+5muDl43a/Dg6ug/iDtEgCB0 n7Zv/9K4j8DJLfrMfTyXJcAuM2nE2OZNq4W3sQ9WwmLHv3XTsBrnr/xy6 xcwo4s/tvSWPeb3hvW7ra6+GYpgZzuAkSf6s+fgBqifjjlrUGZJJNmj+Q 9De5Xq22n4efPmu2JfRhZv4FkjaG8H46W10UTtAEOJWi7Br89hnWlU0yr dBmB/mNvZ0rkhk+3G2zj/l5MQ32SUX3jpwtpjN8JD9luT9acp6BgrYloI A==; X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="327904265" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="327904265" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:10 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="621836316" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="621836316" Received: from maciejos-mobl.ger.corp.intel.com (HELO paris.ger.corp.intel.com) ([10.249.147.47]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:01 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com, thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch, andi.shyti@linux.intel.com, andrzej.hajda@intel.com, keescook@chromium.org, mauro.chehab@linux.intel.com, linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com, ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev, linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org, nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com Subject: [PATCH v13 5/9] drm/i915: Check for integer truncation on scatterlist creation Date: Wed, 28 Sep 2022 11:12:56 +0300 Message-Id: <20220928081300.101516-6-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org From: Chris Wilson There is an impedance mismatch between the scatterlist API using unsigned int and our memory/page accounting in unsigned long. That is, we may try to create a scatterlist for a large object that overflows, returning a small table into which we try to fit very many pages. As the object size is under the control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long into the scatterlist's unsigned int, we use the overflows_type() check and report E2BIG prior to the operation. This is already used in our create ioctls to indicate if the uABI request is simply too large for the backing store. Failing that type check, we have a second check at sg_alloc_table time to make sure the values we are passing into the scatterlist API are not truncated. It uses pgoff_t for locals that deal with page indices; in this case, the page count is the limit of the page index. And it uses the check_assign() macro, which performs a type conversion (cast) of an integer value into a new variable, checking that the destination is large enough to hold the source value.
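For illustration, the shape of that truncation check is easy to reproduce outside the driver. The following is a minimal standalone sketch, not the kernel code: overflows_uint() and alloc_pages_checked() are hypothetical stand-ins for overflows_type() and the driver's get_pages paths, and a 64-bit unsigned long is assumed.

#include <stdio.h>

/*
 * Hypothetical stand-in for the kernel's overflows_type(): the value
 * overflows if it does not survive a round-trip through the
 * destination type (unsigned int, as used inside struct scatterlist).
 */
#define overflows_uint(x) ((unsigned long)(unsigned int)(x) != (unsigned long)(x))

/* mimics the guard placed in front of sg_alloc_table() */
static int alloc_pages_checked(unsigned long npages)
{
	if (overflows_uint(npages))
		return -1; /* the driver returns -E2BIG at this point */
	printf("sg_alloc_table() would be called with %u entries\n",
	       (unsigned int)npages);
	return 0;
}

int main(void)
{
	alloc_pages_checked(512UL);         /* fits in unsigned int */
	if (alloc_pages_checked(1UL << 40)) /* would truncate: rejected */
		printf("page count too large: -E2BIG\n");
	return 0;
}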
v2: Move the added i915_utils macro into the drm_util header (Jani N) v5: Fix macros to be enclosed in parentheses for complex values Fix too long line warning v8: Replace safe_conversion() with check_assign() (Kees) Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_internal.c | 6 ++++-- drivers/gpu/drm/i915/gem/i915_gem_object.h | 3 --- drivers/gpu/drm/i915/gem/i915_gem_phys.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 5 ++++- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 4 ++++ drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 5 ++++- drivers/gpu/drm/i915/gvt/dmabuf.c | 9 +++++---- drivers/gpu/drm/i915/i915_scatterlist.h | 11 +++++++++++ 8 files changed, 36 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c index c698f95af15f..53fa27e1c950 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c @@ -37,10 +37,13 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) struct sg_table *st; struct scatterlist *sg; unsigned int sg_page_sizes; - unsigned int npages; + pgoff_t npages; /* restricted by sg_alloc_table */ int max_order; gfp_t gfp; + if (check_assign(obj->base.size >> PAGE_SHIFT, &npages)) + return -E2BIG; + max_order = MAX_ORDER; #ifdef CONFIG_SWIOTLB if (is_swiotlb_active(obj->base.dev->dev)) { @@ -67,7 +70,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj) if (!st) return -ENOMEM; - npages = obj->base.size / PAGE_SIZE; if (sg_alloc_table(st, npages, GFP_KERNEL)) { kfree(st); return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 26e7f86dbed9..9f8e29112c31 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -26,9 +26,6 @@ enum intel_region_id; * this and catch if we ever need to fix it. In the meantime, if you do * spot such a local variable, please consider fixing! * - * Aside from our own locals (for which we have no excuse!): - * - sg_table embeds unsigned int for nents - * * We can check for invalidly typed locals with typecheck(), see for example * i915_gem_object_get_sg().
*/ diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c index 0d0e46dae559..88ba7266a3a5 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c @@ -28,6 +28,10 @@ static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj) void *dst; int i; + /* Contiguous chunk, with a single scatterlist element */ + if (overflows_type(obj->base.size, sg->length)) + return -E2BIG; + if (GEM_WARN_ON(i915_gem_object_needs_bit17_swizzle(obj))) return -EINVAL; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index f42ca1179f37..339b0a9cf2d0 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -193,13 +193,16 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj) struct drm_i915_private *i915 = to_i915(obj->base.dev); struct intel_memory_region *mem = obj->mm.region; struct address_space *mapping = obj->base.filp->f_mapping; - const unsigned long page_count = obj->base.size / PAGE_SIZE; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; struct sgt_iter sgt_iter; + pgoff_t page_count; struct page *page; int ret; + if (check_assign(obj->base.size >> PAGE_SHIFT, &page_count)) + return -E2BIG; + /* * Assert that the object is not currently in any GPU domain. As it * wasn't in the GTT, there shouldn't be any way it could have been in diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 0f9dd790ceed..8d7c392d335c 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -792,6 +792,10 @@ static int i915_ttm_get_pages(struct drm_i915_gem_object *obj) { struct ttm_place requested, busy[I915_TTM_MAX_PLACEMENTS]; struct ttm_placement placement; + pgoff_t num_pages; + + if (check_assign(obj->base.size >> PAGE_SHIFT, &num_pages)) + return -E2BIG; GEM_BUG_ON(obj->mm.n_placements > I915_TTM_MAX_PLACEMENTS); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c index 8423df021b71..48237e443863 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c @@ -128,13 +128,16 @@ static void i915_gem_object_userptr_drop_ref(struct drm_i915_gem_object *obj) static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj) { - const unsigned long num_pages = obj->base.size >> PAGE_SHIFT; unsigned int max_segment = i915_sg_segment_size(); struct sg_table *st; unsigned int sg_page_sizes; struct page **pvec; + pgoff_t num_pages; /* limited by sg_alloc_table_from_pages_segment */ int ret; + if (check_assign(obj->base.size >> PAGE_SHIFT, &num_pages)) + return -E2BIG; + st = kmalloc(sizeof(*st), GFP_KERNEL); if (!st) return -ENOMEM; diff --git a/drivers/gpu/drm/i915/gvt/dmabuf.c b/drivers/gpu/drm/i915/gvt/dmabuf.c index 01e54b45c5c1..bc6e823584ad 100644 --- a/drivers/gpu/drm/i915/gvt/dmabuf.c +++ b/drivers/gpu/drm/i915/gvt/dmabuf.c @@ -42,8 +42,7 @@ #define GEN8_DECODE_PTE(pte) (pte & GENMASK_ULL(63, 12)) -static int vgpu_gem_get_pages( - struct drm_i915_gem_object *obj) +static int vgpu_gem_get_pages(struct drm_i915_gem_object *obj) { struct drm_i915_private *dev_priv = to_i915(obj->base.dev); struct intel_vgpu *vgpu; @@ -52,7 +51,10 @@ static int vgpu_gem_get_pages( int i, j, ret; gen8_pte_t __iomem *gtt_entries; struct intel_vgpu_fb_info *fb_info; - u32 page_num; + pgoff_t page_num; + + if 
(check_assign(obj->base.size >> PAGE_SHIFT, &page_num)) + return -E2BIG; fb_info = (struct intel_vgpu_fb_info *)obj->gvt_info; if (drm_WARN_ON(&dev_priv->drm, !fb_info)) @@ -66,7 +68,6 @@ static int vgpu_gem_get_pages( if (unlikely(!st)) return -ENOMEM; - page_num = obj->base.size >> PAGE_SHIFT; ret = sg_alloc_table(st, page_num, GFP_KERNEL); if (ret) { kfree(st); diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h index 9ddb3e743a3e..1d1802beb42b 100644 --- a/drivers/gpu/drm/i915/i915_scatterlist.h +++ b/drivers/gpu/drm/i915/i915_scatterlist.h @@ -220,4 +220,15 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res, u64 region_start, u32 page_alignment); +/* Wrap scatterlist.h to sanity check for integer truncation */ +typedef unsigned int __sg_size_t; /* see linux/scatterlist.h */ +#define sg_alloc_table(sgt, nents, gfp) \ + overflows_type(nents, __sg_size_t) ? -E2BIG \ + : ((sg_alloc_table)(sgt, (__sg_size_t)(nents), gfp)) + +#define sg_alloc_table_from_pages_segment(sgt, pages, npages, offset, size, max_segment, gfp) \ + overflows_type(npages, __sg_size_t) ? -E2BIG \ + : ((sg_alloc_table_from_pages_segment)(sgt, pages, (__sg_size_t)(npages), offset, \ + size, max_segment, gfp)) + #endif From patchwork Wed Sep 28 08:12:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12991764 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C26A1C6FA86 for ; Wed, 28 Sep 2022 08:15:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234072AbiI1IPj (ORCPT ); Wed, 28 Sep 2022 04:15:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41724 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234051AbiI1IOj (ORCPT ); Wed, 28 Sep 2022 04:14:39 -0400 Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 016B6BEE; Wed, 28 Sep 2022 01:14:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1664352873; x=1695888873; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=EcU7AaNW1bRj3YDi/xCjAHSsB7dr/NE8Y5qtQn5y8Vc=; b=PfB8reHr+RbI5RRk0ITSOaNcgiaFF+r/iqUySOzZxry0qMSKidAyvfEk lSFgoBP8Q+7F4tUs7oVgrTLTCu9BhI7vYCKbZ+Sp4P5NYASjXjGBM6Kd5 G+9rF0s20uc9nk9myg0BTGl5KJ4D/RevshVuR00YK6YmGwKPjLfVC8v+p EALP5gLOvwJuf7MQpFN3tUDJ/Mqj3B/j5dzofG90N35vye/OfbPGK99vu NjmLCn3u+Mo/EV9/DhSqNp6U6zGHzPCocq4LP3EwPnlji8Qcl5ZBtHOTI OVfNGahkHtJV9s2Z7LcsnYPqhpk6WN7lNl1aLhZVtR3z0L0uZrXWksIM0 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="300258445" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="300258445" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:19 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="621836368" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="621836368" Received: from maciejos-mobl.ger.corp.intel.com (HELO paris.ger.corp.intel.com) ([10.249.147.47]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:10 -0700 From: Gwan-gyeong Mun To: 
intel-gfx@lists.freedesktop.org Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com, thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch, andi.shyti@linux.intel.com, andrzej.hajda@intel.com, keescook@chromium.org, mauro.chehab@linux.intel.com, linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com, ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev, linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org, nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com Subject: [PATCH v13 6/9] drm/i915: Check for integer truncation on the configuration of ttm place Date: Wed, 28 Sep 2022 11:12:57 +0300 Message-Id: <20220928081300.101516-7-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org There is an impedance mismatch between the first/last valid page frame numbers of a ttm place, which are unsigned int, and our memory/page accounting in unsigned long. As the object size is under the control of userspace, we have to be prudent and catch the conversion errors. To catch the implicit truncation as we switch from unsigned long to unsigned, we use the overflows_type() check and report E2BIG, or WARN, prior to the operation. v3: Do not change execution inside a macro. (Mauro) Add the safe_conversion_gem_bug_on() macro and remove the temporary SAFE_CONVERSION() macro. v4: Fix an unhandled GEM_BUG_ON() macro call from safe_conversion_gem_bug_on() v6: Fix to follow the general use case for GEM_BUG_ON(). (Jani) v7: Use the WARN_ON() macro where the GEM_BUG_ON() macro was used.
(Jani) v8: Replace safe_conversion() with check_assign() (Kees) Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Cc: Jani Nikula Reviewed-by: Nirmoy Das (v2) Reviewed-by: Mauro Carvalho Chehab (v3) Reported-by: kernel test robot Reviewed-by: Andrzej Hajda (v5) --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 6 +++--- drivers/gpu/drm/i915/intel_region_ttm.c | 17 ++++++++++++++--- 2 files changed, 17 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index 8d7c392d335c..d33f06b95c48 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -140,14 +140,14 @@ i915_ttm_place_from_region(const struct intel_memory_region *mr, if (flags & I915_BO_ALLOC_CONTIGUOUS) place->flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place->fpfn = offset >> PAGE_SHIFT; - place->lpfn = place->fpfn + (size >> PAGE_SHIFT); + WARN_ON(check_assign(offset >> PAGE_SHIFT, &place->fpfn)); + WARN_ON(check_assign(place->fpfn + (size >> PAGE_SHIFT), &place->lpfn)); } else if (mr->io_size && mr->io_size < mr->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place->flags |= TTM_PL_FLAG_TOPDOWN; } else { place->fpfn = 0; - place->lpfn = mr->io_size >> PAGE_SHIFT; + WARN_ON(check_assign(mr->io_size >> PAGE_SHIFT, &place->lpfn)); } } } diff --git a/drivers/gpu/drm/i915/intel_region_ttm.c b/drivers/gpu/drm/i915/intel_region_ttm.c index 575d67bc6ffe..37a964b20b36 100644 --- a/drivers/gpu/drm/i915/intel_region_ttm.c +++ b/drivers/gpu/drm/i915/intel_region_ttm.c @@ -209,14 +209,23 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, if (flags & I915_BO_ALLOC_CONTIGUOUS) place.flags |= TTM_PL_FLAG_CONTIGUOUS; if (offset != I915_BO_INVALID_OFFSET) { - place.fpfn = offset >> PAGE_SHIFT; - place.lpfn = place.fpfn + (size >> PAGE_SHIFT); + if (WARN_ON(check_assign(offset >> PAGE_SHIFT, &place.fpfn))) { + ret = -E2BIG; + goto out; + } + if (WARN_ON(check_assign(place.fpfn + (size >> PAGE_SHIFT), &place.lpfn))) { + ret = -E2BIG; + goto out; + } } else if (mem->io_size && mem->io_size < mem->total) { if (flags & I915_BO_ALLOC_GPU_ONLY) { place.flags |= TTM_PL_FLAG_TOPDOWN; } else { place.fpfn = 0; - place.lpfn = mem->io_size >> PAGE_SHIFT; + if (WARN_ON(check_assign(mem->io_size >> PAGE_SHIFT, &place.lpfn))) { + ret = -E2BIG; + goto out; + } } } @@ -224,6 +233,8 @@ intel_region_ttm_resource_alloc(struct intel_memory_region *mem, mock_bo.bdev = &mem->i915->bdev; ret = man->func->alloc(man, &mock_bo, &place, &res); + +out: if (ret == -ENOSPC) ret = -ENXIO; if (!ret) From patchwork Wed Sep 28 08:12:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12991763 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8BC19C04A95 for ; Wed, 28 Sep 2022 08:15:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234025AbiI1IPi (ORCPT ); Wed, 28 Sep 2022 04:15:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233536AbiI1IOi (ORCPT ); Wed, 28 Sep 2022 04:14:38 -0400 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 92B6410C; Wed, 28 Sep 2022 01:14:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1664352871; x=1695888871; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fusilmdSQD+FIBKYeUukiHJrxKRQ3B5pPSTTQeO8rTk=; b=aRtVmJZP3dhWAMYy7vGcUpqZwcujmTa8Kc9fXWFcLVS/dPSWK6Y4kRZp 5UGBc+T+jxOTTlDpeArjG2wXxWhf3kWCnhyYZTbTUjiLld8eSnaVkcRyR U7FRjuR4sGXI7Wd1dohCMbUtB390VDD04LWQVYEr6ZAgvWjXihM6d+NWg kodgxHWBOWebCYyxIzW4NY5P+I0tEfoIzq8Cc5b9PNAp1HhYfU24UhJKQ +GCsKe/zbz+3nW5JgSZ6tvtB142hDh5GQWmNPOuM6j0qLPPVv00B2S4Co mLQ5nO2qI4xNXSEvwCdx23liYWbhj1fOY60WJj7Q03WoUlzkMLtoXGGPR w==; X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="303022875" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="303022875" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:29 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="621836391" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="621836391" Received: from maciejos-mobl.ger.corp.intel.com (HELO paris.ger.corp.intel.com) ([10.249.147.47]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:19 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com, thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch, andi.shyti@linux.intel.com, andrzej.hajda@intel.com, keescook@chromium.org, mauro.chehab@linux.intel.com, linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com, ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev, linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org, nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com Subject: [PATCH v13 7/9] drm/i915: Check if the size is too big while creating shmem file Date: Wed, 28 Sep 2022 11:12:58 +0300 Message-Id: <20220928081300.101516-8-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org The __shmem_file_setup() function returns -EINVAL if size is greater than MAX_LFS_FILESIZE. To handle the same error as other code that returns -E2BIG when the size is too large, we add a check that returns -E2BIG when the size is larger than what can be handled. v4: If BITS_PER_LONG is 32, size > MAX_LFS_FILESIZE is always false, so the check is only needed when BITS_PER_LONG is 64.
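As a standalone illustration of the guard's shape (create_shmem_checked() and MAX_FILESIZE below are hypothetical stand-ins for __create_shmem() and the kernel's MAX_LFS_FILESIZE; this is a sketch, not the driver code):

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define BITS_PER_LONG ((int)(sizeof(long) * CHAR_BIT))
/* stand-in for the 64-bit kernel's MAX_LFS_FILESIZE (LLONG_MAX there) */
#define MAX_FILESIZE ((uint64_t)INT64_MAX)

static int create_shmem_checked(uint64_t size)
{
	/*
	 * Mirrors the added check: per the commit message it only
	 * matters on 64-bit, where the comparison can be true.
	 */
	if (BITS_PER_LONG == 64 && size > MAX_FILESIZE)
		return -1; /* the driver returns -E2BIG here */
	printf("shmem file of %llu bytes would be set up\n",
	       (unsigned long long)size);
	return 0;
}

int main(void)
{
	create_shmem_checked(4096);
	if (create_shmem_checked(UINT64_MAX))
		printf("size exceeds the backing-store limit: -E2BIG\n");
	return 0;
}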
Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reported-by: kernel test robot Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c index 339b0a9cf2d0..ca30060e34ab 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c @@ -541,6 +541,20 @@ static int __create_shmem(struct drm_i915_private *i915, drm_gem_private_object_init(&i915->drm, obj, size); + /* XXX: The __shmem_file_setup() function returns -EINVAL if size is + * greater than MAX_LFS_FILESIZE. + * To handle the same error as other code that returns -E2BIG when + * the size is too large, we add a check that returns -E2BIG when the + * size is larger than what can be handled. + * If BITS_PER_LONG is 32, size > MAX_LFS_FILESIZE is always false, + * so we only need to check when BITS_PER_LONG is 64. + * If BITS_PER_LONG is 32, E2BIG checks are processed when + * i915_gem_object_size_2big() is called before init_object() callback + * is called. + */ + if (BITS_PER_LONG == 64 && size > MAX_LFS_FILESIZE) + return -E2BIG; + if (i915->mm.gemfs) filp = shmem_file_setup_with_mnt(i915->mm.gemfs, "i915", size, flags); From patchwork Wed Sep 28 08:12:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12991765 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29963C6FA86 for ; Wed, 28 Sep 2022 08:15:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234091AbiI1IPq (ORCPT ); Wed, 28 Sep 2022 04:15:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41732 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234036AbiI1IOk (ORCPT ); Wed, 28 Sep 2022 04:14:40 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5775495A7; Wed, 28 Sep 2022 01:14:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1664352878; x=1695888878; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=sje3TAP90mMjJ4940AZvP8k1syPAFp/h9HzOk/O8E9U=; b=QQi247+rtNGN+h1CUiI4NoA5kygwLvjGRHySTnaQF+AqlnTKOFAaa4ax yDjpkM/2jmoTZXBnFlM1dakswlQ14ERRfsP3N9jROGwVjKAis7xwGzCqY DWeo/7oMrI/wluUJmb/IHsvEzkVe7AWV6DXI8RGc1cFWzUJLtbhukyl8b UOfD6HOrao7TZp6jcJXayHWz+XYscmGSqxSbJa34DP90ZtYoRnMtr7R8P L6xdjigtymmRhjfMQNJNCZ+pdAAfTlzPhfYU9Fga7zChl8nKkZ4nsShik RsMPwnNNIcS7SSIb3AeAOMluK8DJ23vdgJ16055cuLiPJ54ztWpKbrBgs g==; X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="281910375" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="281910375" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:37 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="621836436" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="621836436" Received: from maciejos-mobl.ger.corp.intel.com (HELO paris.ger.corp.intel.com) ([10.249.147.47]) by
orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:28 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com, thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch, andi.shyti@linux.intel.com, andrzej.hajda@intel.com, keescook@chromium.org, mauro.chehab@linux.intel.com, linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com, ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev, linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org, nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com Subject: [PATCH v13 8/9] drm/i915: Use error code as -E2BIG when the size of gem ttm object is too large Date: Wed, 28 Sep 2022 11:12:59 +0300 Message-Id: <20220928081300.101516-9-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org The ttm_bo_init_reserved() function returns -ENOSPC if the size is too big to add a vma. The direct function that returns -ENOSPC is drm_mm_insert_node_in_range(). To handle the same error as other code returning -E2BIG when the size is too large, we convert the return value to -E2BIG. Signed-off-by: Gwan-gyeong Mun Cc: Chris Wilson Cc: Matthew Auld Cc: Thomas Hellström Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c index d33f06b95c48..a2557f1ecbce 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c @@ -1243,6 +1243,17 @@ int __i915_gem_ttm_object_init(struct intel_memory_region *mem, ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), bo_type, &i915_sys_placement, page_size >> PAGE_SHIFT, &ctx, NULL, NULL, i915_ttm_bo_destroy); + + /* + * XXX: The ttm_bo_init_reserved() function returns -ENOSPC if the size + * is too big to add a vma. The direct function that returns -ENOSPC is + * drm_mm_insert_node_in_range(). To handle the same error as other code + * that returns -E2BIG when the size is too large, we convert -ENOSPC to + * -E2BIG.
+ */ + if (size >> PAGE_SHIFT > INT_MAX && ret == -ENOSPC) + ret = -E2BIG; + if (ret) return i915_ttm_err_to_gem(ret); From patchwork Wed Sep 28 08:13:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Gwan-gyeong Mun X-Patchwork-Id: 12991766 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 20FE9C04A95 for ; Wed, 28 Sep 2022 08:16:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233980AbiI1IQU (ORCPT ); Wed, 28 Sep 2022 04:16:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42416 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233972AbiI1IP3 (ORCPT ); Wed, 28 Sep 2022 04:15:29 -0400 Received: from mga14.intel.com (mga14.intel.com [192.55.52.115]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 924FE1C40F; Wed, 28 Sep 2022 01:14:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1664352891; x=1695888891; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=8mTLsr3wQJEHPys39tavADqBaVpCfMjGIDna8R9c7q4=; b=V4WpTeGPsSFJwZeu5oX+rMbuO8QTpEpqB+FNC9hBhm/1Zl3Meao3wnJX dI5d8VptR6yIFXFZZMKeVJskVbSq+WetZEpQ9sGRwbinTWyct3X3n60M1 aHdGrY1YbmnHCDBSkvJm1ShV4ndDiEZGHx3+c3X+6DfTvBGPICwSEnT+e e4x5mJXmabCm7Ds4nsHKqplyWqcEnCPXU4d6rsLeAwWlGv4jGnGI/CDk5 izExZ9OnA9SmaGmV05GwsquCG5xA8Xo0BJ74id2xJZTSVFTYwdZTq4VSt wK0p/5GRIh7HIS3iQkxuxzyObXvVw+zdYyhfLwgXw2m8uPsFq9YmRAXkI Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="301516417" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="301516417" Received: from orsmga002.jf.intel.com ([10.7.209.21]) by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:47 -0700 X-IronPort-AV: E=McAfee;i="6500,9779,10483"; a="621836461" X-IronPort-AV: E=Sophos;i="5.93,351,1654585200"; d="scan'208";a="621836461" Received: from maciejos-mobl.ger.corp.intel.com (HELO paris.ger.corp.intel.com) ([10.249.147.47]) by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 28 Sep 2022 01:14:38 -0700 From: Gwan-gyeong Mun To: intel-gfx@lists.freedesktop.org Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, mchehab@kernel.org, chris@chris-wilson.co.uk, matthew.auld@intel.com, thomas.hellstrom@linux.intel.com, jani.nikula@intel.com, nirmoy.das@intel.com, airlied@redhat.com, daniel@ffwll.ch, andi.shyti@linux.intel.com, andrzej.hajda@intel.com, keescook@chromium.org, mauro.chehab@linux.intel.com, linux@rasmusvillemoes.dk, vitor@massaru.org, dlatypov@google.com, ndesaulniers@google.com, trix@redhat.com, llvm@lists.linux.dev, linux-hardening@vger.kernel.org, linux-sparse@vger.kernel.org, nathan@kernel.org, gustavoars@kernel.org, luc.vanoostenryck@gmail.com Subject: [PATCH v13 9/9] drm/i915: Remove truncation warning for large objects Date: Wed, 28 Sep 2022 11:13:00 +0300 Message-Id: <20220928081300.101516-10-gwan-gyeong.mun@intel.com> X-Mailer: git-send-email 2.37.1 In-Reply-To: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> References: <20220928081300.101516-1-gwan-gyeong.mun@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-hardening@vger.kernel.org From: Chris Wilson Having addressed the issues surrounding incorrect 
types for local variables and potential integer truncation when using the scatterlist API, we have closed all the loopholes we had previously identified with dangerously large object creation. As such, we can eliminate the warning put in place to remind us to complete the review. Signed-off-by: Chris Wilson Signed-off-by: Gwan-gyeong Mun Cc: Tvrtko Ursulin Cc: Brian Welty Cc: Matthew Auld Cc: Thomas Hellström Testcase: igt@gem_create@create-massive Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4991 Reviewed-by: Nirmoy Das Reviewed-by: Mauro Carvalho Chehab Reviewed-by: Andrzej Hajda --- drivers/gpu/drm/i915/gem/i915_gem_object.h | 15 --------------- 1 file changed, 15 deletions(-) diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h index 9f8e29112c31..59a64262647b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_object.h +++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h @@ -20,25 +20,10 @@ enum intel_region_id; -/* - * XXX: There is a prevalence of the assumption that we fit the - * object's page count inside a 32bit _signed_ variable. Let's document - * this and catch if we ever need to fix it. In the meantime, if you do - * spot such a local variable, please consider fixing! - * - * We can check for invalidly typed locals with typecheck(), see for example - * i915_gem_object_get_sg(). - */ -#define GEM_CHECK_SIZE_OVERFLOW(sz) \ - GEM_WARN_ON((sz) >> PAGE_SHIFT > INT_MAX) - static inline bool i915_gem_object_size_2big(u64 size) { struct drm_i915_gem_object *obj; - if (GEM_CHECK_SIZE_OVERFLOW(size)) - return true; - if (overflows_type(size, obj->base.size)) return true;