From patchwork Thu Feb 24 05:28:05 2022
X-Patchwork-Submitter: "Verma, Vishal L"
X-Patchwork-Id: 12757851
From: Vishal Verma
To:
Cc: Joao Martins , , Dan Williams , Vishal Verma
Subject: [ndctl PATCH] util/size.h: fix build for older compilers
Date: Wed, 23 Feb 2022 22:28:05 -0700
Message-Id: <20220224052805.2462449-1-vishal.l.verma@intel.com>
X-Mailer: git-send-email 2.34.1
X-Mailing-List: linux-cxl@vger.kernel.org

Add a fallback for older compilers that lack __builtin_add_overflow()
and friends. Commit 7aa7c7be6e80 ("util: add the struct_size() helper
from the kernel"), which added these helpers from the kernel, neglected
to copy over the fallback code.
Fixes: 7aa7c7be6e80 ("util: add the struct_size() helper from the kernel")
Reported-by: Joao Martins
Signed-off-by: Vishal Verma
Reviewed-by: Joao Martins
---
 util/size.h | 163 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 159 insertions(+), 4 deletions(-)

base-commit: 3e4a66f0dfb02046f6d3375d637840b6da9c71d1

diff --git a/util/size.h b/util/size.h
index e72467f..1cb0669 100644
--- a/util/size.h
+++ b/util/size.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include

 #define SZ_1K 0x00000400
 #define SZ_4K 0x00001000
@@ -43,23 +44,177 @@ static inline bool is_power_of_2(unsigned long long v)
  * alias for __builtin_add_overflow, but add type checks similar to
  * below.
  */
-#define check_add_overflow(a, b, d) (({ \
+#define is_signed_type(type) (((type)(-1)) < (type)1)
+#define __type_half_max(type) ((type)1 << (8*sizeof(type) - 1 - is_signed_type(type)))
+#define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
+#define type_min(T) ((T)((T)-type_max(T)-(T)1))
+
+#if GCC_VERSION >= 50100
+#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
+#endif
+
+#if __clang__ && \
+	__has_builtin(__builtin_mul_overflow) && \
+	__has_builtin(__builtin_add_overflow)
+#define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
+#endif
+
+#if COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
+
+#define check_add_overflow(a, b, d) ({ \
 	typeof(a) __a = (a);			\
 	typeof(b) __b = (b);			\
 	typeof(d) __d = (d);			\
 	(void) (&__a == &__b);			\
 	(void) (&__a == __d);			\
 	__builtin_add_overflow(__a, __b, __d);	\
-}))
+})

-#define check_mul_overflow(a, b, d) (({ \
+#define check_sub_overflow(a, b, d) ({ \
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	__builtin_sub_overflow(__a, __b, __d);	\
+})
+
+#define check_mul_overflow(a, b, d) ({ \
 	typeof(a) __a = (a);			\
 	typeof(b) __b = (b);			\
 	typeof(d) __d = (d);			\
 	(void) (&__a == &__b);			\
 	(void) (&__a == __d);			\
 	__builtin_mul_overflow(__a, __b, __d);	\
-}))
+})
+
+
+#else /* !COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW */
+
+/* Checking for unsigned overflow is relatively easy without causing UB. */
+#define __unsigned_add_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = __a + __b;			\
+	*__d < __a;				\
+})
+#define __unsigned_sub_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = __a - __b;			\
+	__a < __b;				\
+})
+/*
+ * If one of a or b is a compile-time constant, this avoids a division.
+ */
+#define __unsigned_mul_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = __a * __b;			\
+	__builtin_constant_p(__b) ?		\
+	__b > 0 && __a > type_max(typeof(__a)) / __b : \
+	__a > 0 && __b > type_max(typeof(__b)) / __a; \
+})
+
+/*
+ * For signed types, detecting overflow is much harder, especially if
+ * we want to avoid UB. But the interface of these macros is such that
+ * we must provide a result in *d, and in fact we must produce the
+ * result promised by gcc's builtins, which is simply the possibly
+ * wrapped-around value. Fortunately, we can just formally do the
+ * operations in the widest relevant unsigned type (u64) and then
+ * truncate the result - gcc is smart enough to generate the same code
+ * with and without the (u64) casts.
+ */
+
+/*
+ * Adding two signed integers can overflow only if they have the same
+ * sign, and overflow has happened iff the result has the opposite
+ * sign.
+ */
+#define __signed_add_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = (u64)__a + (u64)__b;		\
+	(((~(__a ^ __b)) & (*__d ^ __a))	\
+		& type_min(typeof(__a))) != 0;	\
+})
+
+/*
+ * Subtraction is similar, except that overflow can now happen only
+ * when the signs are opposite. In this case, overflow has happened if
+ * the result has the opposite sign of a.
+ */
+#define __signed_sub_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = (u64)__a - (u64)__b;		\
+	((((__a ^ __b)) & (*__d ^ __a))		\
+		& type_min(typeof(__a))) != 0;	\
+})
+
+/*
+ * Signed multiplication is rather hard. gcc always follows C99, so
+ * division is truncated towards 0. This means that we can write the
+ * overflow check like this:
+ *
+ * (a > 0 && (b > MAX/a || b < MIN/a)) ||
+ * (a < -1 && (b > MIN/a || b < MAX/a) ||
+ * (a == -1 && b == MIN)
+ *
+ * The redundant casts of -1 are to silence an annoying -Wtype-limits
+ * (included in -Wextra) warning: When the type is u8 or u16, the
+ * __b_c_e in check_mul_overflow obviously selects
+ * __unsigned_mul_overflow, but unfortunately gcc still parses this
+ * code and warns about the limited range of __b.
+ */
+
+#define __signed_mul_overflow(a, b, d) ({	\
+	typeof(a) __a = (a);			\
+	typeof(b) __b = (b);			\
+	typeof(d) __d = (d);			\
+	typeof(a) __tmax = type_max(typeof(a));	\
+	typeof(a) __tmin = type_min(typeof(a));	\
+	(void) (&__a == &__b);			\
+	(void) (&__a == __d);			\
+	*__d = (u64)__a * (u64)__b;		\
+	(__b > 0 && (__a > __tmax/__b || __a < __tmin/__b)) || \
+	(__b < (typeof(__b))-1 && (__a > __tmin/__b || __a < __tmax/__b)) || \
+	(__b == (typeof(__b))-1 && __a == __tmin); \
+})
+
+
+#define check_add_overflow(a, b, d) \
+	__builtin_choose_expr(is_signed_type(typeof(a)), \
+			__signed_add_overflow(a, b, d), \
+			__unsigned_add_overflow(a, b, d))
+
+#define check_sub_overflow(a, b, d) \
+	__builtin_choose_expr(is_signed_type(typeof(a)), \
+			__signed_sub_overflow(a, b, d), \
+			__unsigned_sub_overflow(a, b, d))
+
+#define check_mul_overflow(a, b, d) \
+	__builtin_choose_expr(is_signed_type(typeof(a)), \
+			__signed_mul_overflow(a, b, d), \
+			__unsigned_mul_overflow(a, b, d))
+
+#endif

 /*
  * Compute a*b+c, returning SIZE_MAX on overflow. Internal helper for
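
For reference, here is a minimal sketch (not part of the patch) of how the
check_*_overflow() helpers are used, and of the contract the fallback has to
preserve: the possibly wrapped result is always stored through the third
argument, and the macro itself evaluates to true iff the operation
overflowed. The file name and test scaffolding below are hypothetical;
util/size.h and the u64 typedef are assumed to come from the ndctl tree.

/* overflow-demo.c: hypothetical standalone test, not part of this patch */
#include <stdio.h>
#include <stdint.h>
#include <util/size.h>	/* assumed reachable via -I on the ndctl tree */

int main(void)
{
	uint32_t usum;
	int32_t sprod;

	/* UINT32_MAX + 1 wraps to 0; the macro reports the overflow */
	if (check_add_overflow(UINT32_MAX, 1U, &usum))
		printf("u32 add overflowed, wrapped result: %u\n", usum);

	/* INT32_MAX * 2 exceeds the signed 32-bit range; wraps to -2 */
	if (check_mul_overflow(INT32_MAX, 2, &sprod))
		printf("s32 mul overflowed, wrapped result: %d\n", sprod);

	return 0;
}

With gcc >= 5.1 or a recent clang this resolves to the builtins; on older
compilers it should exercise the open-coded __unsigned_add_overflow() /
__signed_mul_overflow() paths above and print the same wrapped results.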