From patchwork Tue Oct 18 14:00:22 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13010585
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Michal Swiatkowski,
    Maciej Fijalkowski, Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 net-next 1/6] bitmap: try to optimize arr32 <-> bitmap on 64-bit LEs
Date: Tue, 18 Oct 2022 16:00:22 +0200
Message-Id: <20221018140027.48086-2-alexandr.lobakin@intel.com>
In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com>
References: <20221018140027.48086-1-alexandr.lobakin@intel.com>

Unlike bitmap_{from,to}_arr64(), where out-of-bounds accesses are impossible
(u64 is never shorter than unsigned long), the same can't be guaranteed for
arr32s, because on 64-bit platforms:

bits     BITS_TO_U32 * sizeof(u32)     BITS_TO_LONGS * sizeof(long)
1-32                4                               8
33-64               8                               8
65-96              12                              16
97-128             16                              16

and so on. That is why bitmap_{from,to}_arr32() are always defined there as
externs.
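For illustration only (not part of the patch): the divergence above is purely
the result of rounding @nbits up to u32 units vs. long units. A standalone
sketch, with the two macros reimplemented locally rather than taken from the
kernel headers:

	#include <stdio.h>
	#include <stdint.h>

	/* local reimplementations of the rounding the table is based on */
	#define BITS_TO_U32(nbits)	(((nbits) + 31u) / 32u)
	#define BITS_TO_LONGS(nbits)	(((nbits) + 63u) / 64u)	/* 64-bit platform */

	int main(void)
	{
		for (unsigned int nbits = 1; nbits <= 128; nbits++) {
			unsigned int arr32 = BITS_TO_U32(nbits) * sizeof(uint32_t);
			unsigned int bmap = BITS_TO_LONGS(nbits) * 8;

			/* sizes only match when nbits rounds up to a multiple of 64 */
			if (arr32 != bmap)
				printf("%3u bits: arr32 %u bytes, bitmap %u bytes\n",
				       nbits, arr32, bmap);
		}

		return 0;
	}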
But quite often @nbits is a compile-time constant, which means we could suggest whether it can be inlined or not at compile-time basing on the number of bits (above). So, try to determine that at compile time and, in case of both containers having the same size in bytes, resolve it to bitmap_copy_clear_tail() on Little Endian. No changes here for Big Endian or when the number of bits *really* is variable. Signed-off-by: Alexander Lobakin --- include/linux/bitmap.h | 51 ++++++++++++++++++++++++++++++------------ lib/bitmap.c | 12 +++++----- 2 files changed, 43 insertions(+), 20 deletions(-) diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index 7d6d73b78147..79d12e0f748b 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -283,24 +283,47 @@ static inline void bitmap_copy_clear_tail(unsigned long *dst, * On 32-bit systems bitmaps are represented as u32 arrays internally. On LE64 * machines the order of hi and lo parts of numbers match the bitmap structure. * In both cases conversion is not needed when copying data from/to arrays of - * u32. But in LE64 case, typecast in bitmap_copy_clear_tail() may lead - * to out-of-bound access. To avoid that, both LE and BE variants of 64-bit - * architectures are not using bitmap_copy_clear_tail(). + * u32. But in LE64 case, typecast in bitmap_copy_clear_tail() may lead to + * out-of-bound access. To avoid that, LE variant of 64-bit architectures uses + * bitmap_copy_clear_tail() only when @bitmap and @buf containers have the same + * size in memory (known at compile time), and 64-bit BEs never use it. */ -#if BITS_PER_LONG == 64 -void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, - unsigned int nbits); -void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, - unsigned int nbits); +#if BITS_PER_LONG == 32 +#define bitmap_arr32_compat(nbits) true +#elif defined(__LITTLE_ENDIAN) +#define bitmap_arr32_compat(nbits) \ + (__builtin_constant_p(nbits) && \ + BITS_TO_U32(nbits) * sizeof(u32) == \ + BITS_TO_LONGS(nbits) * sizeof(long)) #else -#define bitmap_from_arr32(bitmap, buf, nbits) \ - bitmap_copy_clear_tail((unsigned long *) (bitmap), \ - (const unsigned long *) (buf), (nbits)) -#define bitmap_to_arr32(buf, bitmap, nbits) \ - bitmap_copy_clear_tail((unsigned long *) (buf), \ - (const unsigned long *) (bitmap), (nbits)) +#define bitmap_arr32_compat(nbits) false #endif +void __bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, unsigned int nbits); +void __bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits); + +static inline void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, + unsigned int nbits) +{ + const unsigned long *src = (const unsigned long *)buf; + + if (bitmap_arr32_compat(nbits)) + bitmap_copy_clear_tail(bitmap, src, nbits); + else + __bitmap_from_arr32(bitmap, buf, nbits); +} + +static inline void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, + unsigned int nbits) +{ + unsigned long *dst = (unsigned long *)buf; + + if (bitmap_arr32_compat(nbits)) + bitmap_copy_clear_tail(dst, bitmap, nbits); + else + __bitmap_to_arr32(buf, bitmap, nbits); +} + /* * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32 * machines the order of hi and lo parts of numbers match the bitmap structure. 
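A usage sketch (hypothetical caller and names, not part of the series) of
when the new inline can collapse to a plain copy: with a compile-time-constant
@nbits that rounds up to whole longs, bitmap_arr32_compat() is true on 64-bit
LE and the call resolves to bitmap_copy_clear_tail(); with a variable @nbits
it still goes through the out-of-line __bitmap_from_arr32():

	#include <linux/bitmap.h>

	#define FOO_FEATURES_NBITS	64	/* u32[2] and long[1] are both 8 bytes */

	static void foo_get_features(unsigned long *dst, const u32 *msg,
				     unsigned int msg_bits)
	{
		DECLARE_BITMAP(supported, FOO_FEATURES_NBITS);

		/* constant nbits == 64 -> inlined bitmap_copy_clear_tail() on LE64 */
		bitmap_from_arr32(supported, msg, FOO_FEATURES_NBITS);

		/* variable nbits -> falls back to __bitmap_from_arr32() */
		bitmap_from_arr32(dst, msg, msg_bits);

		bitmap_and(dst, dst, supported,
			   min_t(unsigned int, msg_bits, FOO_FEATURES_NBITS));
	}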
diff --git a/lib/bitmap.c b/lib/bitmap.c index 1c81413c51f8..e3eb12ff1637 100644 --- a/lib/bitmap.c +++ b/lib/bitmap.c @@ -1449,12 +1449,12 @@ EXPORT_SYMBOL_GPL(devm_bitmap_zalloc); #if BITS_PER_LONG == 64 /** - * bitmap_from_arr32 - copy the contents of u32 array of bits to bitmap + * __bitmap_from_arr32 - copy the contents of u32 array of bits to bitmap * @bitmap: array of unsigned longs, the destination bitmap * @buf: array of u32 (in host byte order), the source bitmap * @nbits: number of bits in @bitmap */ -void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, unsigned int nbits) +void __bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, unsigned int nbits) { unsigned int i, halfwords; @@ -1469,15 +1469,15 @@ void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf, unsigned int nbits if (nbits % BITS_PER_LONG) bitmap[(halfwords - 1) / 2] &= BITMAP_LAST_WORD_MASK(nbits); } -EXPORT_SYMBOL(bitmap_from_arr32); +EXPORT_SYMBOL(__bitmap_from_arr32); /** - * bitmap_to_arr32 - copy the contents of bitmap to a u32 array of bits + * __bitmap_to_arr32 - copy the contents of bitmap to a u32 array of bits * @buf: array of u32 (in host byte order), the dest bitmap * @bitmap: array of unsigned longs, the source bitmap * @nbits: number of bits in @bitmap */ -void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits) +void __bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits) { unsigned int i, halfwords; @@ -1492,7 +1492,7 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits) if (nbits % BITS_PER_LONG) buf[halfwords - 1] &= (u32) (UINT_MAX >> ((-nbits) & 31)); } -EXPORT_SYMBOL(bitmap_to_arr32); +EXPORT_SYMBOL(__bitmap_to_arr32); #endif #if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN) From patchwork Tue Oct 18 14:00:23 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13010586 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 196AEC4332F for ; Tue, 18 Oct 2022 14:02:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229909AbiJROCi (ORCPT ); Tue, 18 Oct 2022 10:02:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58554 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230359AbiJROCg (ORCPT ); Tue, 18 Oct 2022 10:02:36 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 64886CF874; Tue, 18 Oct 2022 07:02:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666101755; x=1697637755; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=k+eTdXtPkqMwaDNTEUKdoz77rxk+Xe2f7Xhhd5O90Uw=; b=RYfsiW3w1rA+mF7KLK98laPAmBWCFQJL3Kc/Dzgc/aoZh62PTo3uVxdy eoDT/EQ33Mt+dSNKvDyV/GwhOZioh7U11PJaJpdWPNYHBZkhgaiZjLEB1 b2mdLCoELEJbYiJdwKQjXP60QlZADuQ4AdyUSPpCyo6eIr8YwgtYqTmSg f/WlbGz5jUZH9/pbE7aYQUqmv7KvEIBMvwkjEi4mcanV8O4G3Ri3cYgiK H/Q6S+M8d9jfofo0qJKQW8N2w3Lv9MtAZm517010WQn9YUGDSZiQpbyB3 oALt3cWrDTGffjo+JEtOkFOMFy5QV7kxbAsO7fEBa/AjjK6hcxPwYxAwy Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10504"; a="286502856" X-IronPort-AV: 
E=Sophos;i="5.95,193,1661842800"; d="scan'208";a="286502856" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2022 07:02:35 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10504"; a="697510392" X-IronPort-AV: E=Sophos;i="5.95,193,1661842800"; d="scan'208";a="697510392" Received: from irvmail001.ir.intel.com ([10.43.11.63]) by fmsmga004.fm.intel.com with ESMTP; 18 Oct 2022 07:02:32 -0700 Received: from newjersey.igk.intel.com (newjersey.igk.intel.com [10.102.20.203]) by irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id 29IE2TUM011675; Tue, 18 Oct 2022 15:02:31 +0100 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Yury Norov , Andy Shevchenko , Rasmus Villemoes , Michal Swiatkowski , Maciej Fijalkowski , Alexander Lobakin , netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 net-next 2/6] bitmap: add a couple more helpers to work with arrays of u32s Date: Tue, 18 Oct 2022 16:00:23 +0200 Message-Id: <20221018140027.48086-3-alexandr.lobakin@intel.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com> References: <20221018140027.48086-1-alexandr.lobakin@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org Add two new functions to work on arr32s: * bitmap_arr32_size() - takes number of bits to be stored in arr32 and returns number of bytes required to store such arr32, can be useful when allocating memory for arr32 containers; * bitmap_validate_arr32() - takes pointer to an arr32 and its size in bytes, plus expected number of bits. Ensures that the size is valid (must be a multiply of `sizeof(u32)`) and no bits past the number is set. Also add BITMAP_TO_U64() macro to help return a u64 from a DECLARE_BITMAP(1-64) (it may pick one or two longs depending on the platform). Signed-off-by: Alexander Lobakin --- include/linux/bitmap.h | 20 +++++++++++++++++++- lib/bitmap.c | 40 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 59 insertions(+), 1 deletion(-) diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index 79d12e0f748b..c737b0fe2f41 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -7,7 +7,7 @@ #include #include #include -#include +#include #include #include @@ -75,6 +75,8 @@ struct device; * bitmap_from_arr64(dst, buf, nbits) Copy nbits from u64[] buf to dst * bitmap_to_arr32(buf, src, nbits) Copy nbits from buf to u32[] dst * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst + * bitmap_validate_arr32(buf, len, nbits) Validate u32[] buf of len bytes + * bitmap_arr32_size(nbits) Get size of u32[] arr for nbits * bitmap_get_value8(map, start) Get 8bit value from map at start * bitmap_set_value8(map, value, start) Set 8bit value to map at start * @@ -324,6 +326,20 @@ static inline void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, __bitmap_to_arr32(buf, bitmap, nbits); } +bool bitmap_validate_arr32(const u32 *arr, size_t len, size_t nbits); + +/** + * bitmap_arr32_size - determine the size of array of u32s for a number of bits + * @nbits: number of bits to store in the array + * + * Returns the size in bytes of a u32s-array needed to carry the specified + * number of bits. 
+ */ +static inline size_t bitmap_arr32_size(size_t nbits) +{ + return array_size(BITS_TO_U32(nbits), sizeof(u32)); +} + /* * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32 * machines the order of hi and lo parts of numbers match the bitmap structure. @@ -571,9 +587,11 @@ static inline void bitmap_next_set_region(unsigned long *bitmap, */ #if __BITS_PER_LONG == 64 #define BITMAP_FROM_U64(n) (n) +#define BITMAP_TO_U64(map) ((u64)(map)[0]) #else #define BITMAP_FROM_U64(n) ((unsigned long) ((u64)(n) & ULONG_MAX)), \ ((unsigned long) ((u64)(n) >> 32)) +#define BITMAP_TO_U64(map) (((u64)(map)[1] << 32) | (u64)(map)[0]) #endif /** diff --git a/lib/bitmap.c b/lib/bitmap.c index e3eb12ff1637..e0045ecf34d6 100644 --- a/lib/bitmap.c +++ b/lib/bitmap.c @@ -1495,6 +1495,46 @@ void __bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits EXPORT_SYMBOL(__bitmap_to_arr32); #endif +/** + * bitmap_validate_arr32 - perform validation of a u32-array bitmap + * @arr: array of u32s, the dest bitmap + * @len: length of the array, in bytes + * @nbits: expected/supported number of bits in the bitmap + * + * Returns true if the array passes the checks (see below), false otherwise. + */ +bool bitmap_validate_arr32(const u32 *arr, size_t len, size_t nbits) +{ + size_t word = (nbits - 1) / BITS_PER_TYPE(u32); + u32 pos = (nbits - 1) % BITS_PER_TYPE(u32); + + /* Must consist of 1...n full u32s */ + if (!len || len % sizeof(u32)) + return false; + + /* + * If the array is shorter than expected, assume we support + * all of the bits set there. + */ + if (word >= len / sizeof(u32)) + return true; + + /* Last word must not contain any bits past the expected number */ + if (arr[word] & (u32)~GENMASK(pos, 0)) + return false; + + /* + * If the array is longer than expected, make sure all the bytes + * past the expected length are zeroed. 
+ */ + len -= bitmap_arr32_size(nbits); + if (memchr_inv(&arr[word + 1], 0, len)) + return false; + + return true; +} +EXPORT_SYMBOL(bitmap_validate_arr32); + #if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN) /** * bitmap_from_arr64 - copy the contents of u64 array of bits to bitmap From patchwork Tue Oct 18 14:00:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13010587 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0A87C43219 for ; Tue, 18 Oct 2022 14:02:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231340AbiJROCl (ORCPT ); Tue, 18 Oct 2022 10:02:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230477AbiJROCh (ORCPT ); Tue, 18 Oct 2022 10:02:37 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 28839CF845; Tue, 18 Oct 2022 07:02:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666101756; x=1697637756; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=cOdKTamNw2tvKAGbxW7BgvAt+f+G49WumYQ+CrWTe+Q=; b=oFsBDMYiC0dukIz2kfxDariEcf/6eHXyVw7AiK7IQ28yKn3QHIfdxZZD vfMh3gr8BS6Ew5cDWFo+iHi4Omt+xcXX/O2ovhsCI+tKb++yfyjfXrJnV EdyfjK5XoTuYkNES7u56xYPLjB8z2UvuVJp7K7GNLnPhSqeAFcH3A9ADO u/P8NuJa8kYMDBPsrVKjZavJ6uknW/nUBWln/I3xaFw1V8gV0d9sSm0/L S3AJ+Y4JZiU7K+MahiLQaHOOHh+qIeS0L4cFXW0lOLRR8ar1r9l5O3NcL iKo6bxwCYmOzMo4vEp/eGUHO81umDvxrJSKqIFweB2L+g8GH1XC5XzCdm g==; X-IronPort-AV: E=McAfee;i="6500,9779,10504"; a="286502861" X-IronPort-AV: E=Sophos;i="5.95,193,1661842800"; d="scan'208";a="286502861" Received: from fmsmga004.fm.intel.com ([10.253.24.48]) by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Oct 2022 07:02:36 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10504"; a="697510397" X-IronPort-AV: E=Sophos;i="5.95,193,1661842800"; d="scan'208";a="697510397" Received: from irvmail001.ir.intel.com ([10.43.11.63]) by fmsmga004.fm.intel.com with ESMTP; 18 Oct 2022 07:02:33 -0700 Received: from newjersey.igk.intel.com (newjersey.igk.intel.com [10.102.20.203]) by irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id 29IE2TUN011675; Tue, 18 Oct 2022 15:02:32 +0100 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Yury Norov , Andy Shevchenko , Rasmus Villemoes , Michal Swiatkowski , Maciej Fijalkowski , Alexander Lobakin , netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH v2 net-next 3/6] lib/test_bitmap: verify intermediate arr32 when converting <-> bitmap Date: Tue, 18 Oct 2022 16:00:24 +0200 Message-Id: <20221018140027.48086-4-alexandr.lobakin@intel.com> X-Mailer: git-send-email 2.37.3 In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com> References: <20221018140027.48086-1-alexandr.lobakin@intel.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org When testing converting bitmaps from/to arr32, use bitmap_validate_arr32() to test whether the tail of the intermediate array was cleared correctly. Previously there were checks only for the actual bitmap generated with the double-conversion. Note that we pass bitmap_arr32_size() instead of `sizeof(arr)`, as we poison the bytes past the last used word with 0xa5s. Also, for @nbits == 0, the validation function must return false, account that case as well. Signed-off-by: Alexander Lobakin --- lib/test_bitmap.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index a8005ad3bd58..c40ab3dfa776 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -605,6 +605,7 @@ static void __init test_bitmap_arr32(void) unsigned int nbits, next_bit; u32 arr[EXP1_IN_BITS / 32]; DECLARE_BITMAP(bmap2, EXP1_IN_BITS); + bool valid; memset(arr, 0xa5, sizeof(arr)); @@ -620,6 +621,9 @@ static void __init test_bitmap_arr32(void) " tail is not safely cleared: %d\n", nbits, next_bit); + valid = bitmap_validate_arr32(arr, bitmap_arr32_size(nbits), nbits); + expect_eq_uint(!!nbits, valid); + if (nbits < EXP1_IN_BITS - 32) expect_eq_uint(arr[DIV_ROUND_UP(nbits, 32)], 0xa5a5a5a5); From patchwork Tue Oct 18 14:00:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13010588 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 06F05C4332F for ; Tue, 18 Oct 2022 14:02:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231345AbiJROCm (ORCPT ); Tue, 18 Oct 2022 10:02:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58570 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231294AbiJROCh (ORCPT ); Tue, 18 Oct 2022 10:02:37 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4EA1ED0189; Tue, 18 Oct 2022 07:02:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666101757; x=1697637757; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=KKCVdrqHKAJ3G/YmZRke57HdG0Ps2/KbJfT4bWvKeyo=; b=IvgQhARIwueb6fGExFMQ6Eg99miUgQyvF14jA76wgja1l47xS9MbIEhj b00LU0h1JE69/JEqUCcADGpGd4h/HUS+jqcAeiBlG842ciI5CjOjmDTJJ rnf6W9hRV9Rnc6duaHhQqWqEBmlCAS4E1vvP/dBZ/mVXuD83g6Gn7Qyxm RGMd6fp7BQWxGRYhFuRZ/6cZZw1/ohP5YRZvleK8kW6XAS9rnc8f/jViR Gxx5hS6R07whZyPnYTZg0KNmWUv3kPSJck1BwBkjknbIQzD8MNOdpwbA0 
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Michal Swiatkowski,
    Maciej Fijalkowski, Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Andy Shevchenko
Subject: [PATCH v2 net-next 4/6] lib/test_bitmap: test the newly added arr32 functions
Date: Tue, 18 Oct 2022 16:00:25 +0200
Message-Id: <20221018140027.48086-5-alexandr.lobakin@intel.com>
In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com>

Add a couple of trivial test cases, which exercise the three newly added
helpers for working with arr32s:

* bitmap_validate_arr32() -- test all the branches the function can take
  when validating;
* bitmap_arr32_size() -- sometimes also called inside the previous one;
* BITMAP_TO_U64() -- test it, cast to u32, against arr32[0].
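A small worked example (hypothetical values, not one of the cases added
below) of what these helpers do, for reference:

	#include <linux/bitmap.h>
	#include <linux/bug.h>

	static void arr32_helpers_example(void)
	{
		/* 33 significant bits: 0x1_dead_feed */
		DECLARE_BITMAP(map, 64) = { BITMAP_FROM_U64(0x00000001deadfeedULL) };
		u32 arr[2];
		u64 val;

		bitmap_to_arr32(arr, map, 64);	/* arr[0] == 0xdeadfeed, arr[1] == 0x1 */

		/* BITMAP_TO_U64() picks one long on 64-bit, two longs on 32-bit */
		val = BITMAP_TO_U64(map);
		WARN_ON(val != 0x00000001deadfeedULL);

		/* bit 32 is set, so validating against nbits == 32 must fail */
		WARN_ON(bitmap_validate_arr32(arr, sizeof(arr), 32));	/* false */
		WARN_ON(!bitmap_validate_arr32(arr, sizeof(arr), 64));	/* true */
	}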
Suggested-by: Andy Shevchenko Signed-off-by: Alexander Lobakin --- lib/test_bitmap.c | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index c40ab3dfa776..f168f0a79e4f 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -600,6 +600,36 @@ static void __init test_bitmap_parse(void) } } +static const struct { + DECLARE_BITMAP(bitmap, 128); + u32 nbits; + u32 msglen; + u32 exp_size; + u32 exp_valid:1; +} arr32_test_cases[] __initconst = { +#define BITMAP_ARR32_CASE(h, l, nr, len, ev, es) { \ + .bitmap = { \ + BITMAP_FROM_U64(l), \ + BITMAP_FROM_U64(h), \ + }, \ + .nbits = (nr), \ + .msglen = (len), \ + .exp_valid = (ev), \ + .exp_size = (es), \ +} + /* fail: msglen is not a multiple of 4 */ + BITMAP_ARR32_CASE(0x00000000, 0x0000accedeadfeed, 48, 6, false, 8), + /* pass: kernel supports more bits than received */ + BITMAP_ARR32_CASE(0x00000000, 0xacdcbadadd0afc18, 90, 8, true, 12), + /* fail: unsupported bits set within the last supported word */ + BITMAP_ARR32_CASE(0xfa588103, 0xd3d0a58544864a9c, 88, 12, false, 12), + /* fail: unsupported bits set past the last supported word */ + BITMAP_ARR32_CASE(0x00b84e53, 0x0000a3bafb6484f8, 64, 16, false, 8), + /* pass: kernel supports less bits than received, no unsupported set */ + BITMAP_ARR32_CASE(0x00000000, 0x848d7a2acc7ff31e, 64, 16, true, 8), +#undef BITMAP_ARR32_CASE +}; + static void __init test_bitmap_arr32(void) { unsigned int nbits, next_bit; @@ -628,6 +658,19 @@ static void __init test_bitmap_arr32(void) expect_eq_uint(arr[DIV_ROUND_UP(nbits, 32)], 0xa5a5a5a5); } + + for (u32 i = 0; i < ARRAY_SIZE(arr32_test_cases); i++) { + typeof(*arr32_test_cases) *test = &arr32_test_cases[i]; + + memset(arr, 0, sizeof(arr)); + bitmap_to_arr32(arr, test->bitmap, BYTES_TO_BITS(test->msglen)); + + valid = bitmap_validate_arr32(arr, test->msglen, test->nbits); + expect_eq_uint(test->exp_valid, valid); + + expect_eq_uint(test->exp_size, bitmap_arr32_size(test->nbits)); + expect_eq_uint((u32)BITMAP_TO_U64(test->bitmap), arr[0]); + } } static void __init test_bitmap_arr64(void) From patchwork Tue Oct 18 14:00:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13010589 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B13DC43219 for ; Tue, 18 Oct 2022 14:02:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229980AbiJROCp (ORCPT ); Tue, 18 Oct 2022 10:02:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58628 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231311AbiJROCj (ORCPT ); Tue, 18 Oct 2022 10:02:39 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 42B38D018F; Tue, 18 Oct 2022 07:02:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666101758; x=1697637758; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=6oDING5vGM3dV4GZO+MbDqq/vJ6YjDIWZVcH2ZWozdQ=; b=WmvyLZ/VfJRkAD3gL26gH1dXVfEn34gWER7qqu6RD3XofK8TzYN08G5i 
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Michal Swiatkowski,
    Maciej Fijalkowski, Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Andy Shevchenko
Subject: [PATCH v2 net-next 5/6] bitops: make BYTES_TO_BITS() treewide-available
Date: Tue, 18 Oct 2022 16:00:26 +0200
Message-Id: <20221018140027.48086-6-alexandr.lobakin@intel.com>
In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com>

Avoid open-coding that simple expression each time by moving BYTES_TO_BITS()
from the probes code to <linux/bitops.h> to export it to the rest of the
kernel. Do the same for the tools ecosystem as well (incl. its version of
bitops.h).
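For reference, the macro is a bytes-to-bits multiplication spelled via
BITS_PER_LONG (BITS_PER_LONG / sizeof(long) is 8 on both 32- and 64-bit). A
hypothetical helper (name made up for the example) once the macro is exported:

	#include <linux/bitops.h>

	/* BYTES_TO_BITS(nb) == (nb) * BITS_PER_LONG / sizeof(long) == nb * 8 */
	static size_t foo_payload_bits(int payload_len)
	{
		return BYTES_TO_BITS(payload_len);	/* e.g. 12 bytes -> 96 bits */
	}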
Suggested-by: Andy Shevchenko Signed-off-by: Alexander Lobakin --- include/linux/bitops.h | 1 + kernel/trace/trace_probe.c | 2 -- tools/include/linux/bitops.h | 1 + tools/perf/util/probe-finder.c | 2 -- 4 files changed, 2 insertions(+), 4 deletions(-) diff --git a/include/linux/bitops.h b/include/linux/bitops.h index 2ba557e067fe..e11f19f96853 100644 --- a/include/linux/bitops.h +++ b/include/linux/bitops.h @@ -20,6 +20,7 @@ #define BITS_TO_U64(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(u64)) #define BITS_TO_U32(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(u32)) #define BITS_TO_BYTES(nr) __KERNEL_DIV_ROUND_UP(nr, BITS_PER_TYPE(char)) +#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_LONG / sizeof(long)) extern unsigned int __sw_hweight8(unsigned int w); extern unsigned int __sw_hweight16(unsigned int w); diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c index 36dff277de46..89e73eebc72c 100644 --- a/kernel/trace/trace_probe.c +++ b/kernel/trace/trace_probe.c @@ -523,8 +523,6 @@ parse_probe_arg(char *arg, const struct fetch_type *type, return ret; } -#define BYTES_TO_BITS(nb) ((BITS_PER_LONG * (nb)) / sizeof(long)) - /* Bitfield type needs to be parsed into a fetch function */ static int __parse_bitfield_probe_arg(const char *bf, const struct fetch_type *t, diff --git a/tools/include/linux/bitops.h b/tools/include/linux/bitops.h index f18683b95ea6..aee8667ce941 100644 --- a/tools/include/linux/bitops.h +++ b/tools/include/linux/bitops.h @@ -19,6 +19,7 @@ #define BITS_TO_U64(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(u64)) #define BITS_TO_U32(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(u32)) #define BITS_TO_BYTES(nr) DIV_ROUND_UP(nr, BITS_PER_TYPE(char)) +#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_LONG / sizeof(long)) extern unsigned int __sw_hweight8(unsigned int w); extern unsigned int __sw_hweight16(unsigned int w); diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c index 50d861a80f57..2a0b7aacabc0 100644 --- a/tools/perf/util/probe-finder.c +++ b/tools/perf/util/probe-finder.c @@ -304,8 +304,6 @@ static int convert_variable_location(Dwarf_Die *vr_die, Dwarf_Addr addr, return ret2; } -#define BYTES_TO_BITS(nb) ((nb) * BITS_PER_LONG / sizeof(long)) - static int convert_variable_type(Dwarf_Die *vr_die, struct probe_trace_arg *tvar, const char *cast, bool user_access) From patchwork Tue Oct 18 14:00:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 13010590 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2AA2C433FE for ; Tue, 18 Oct 2022 14:03:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231365AbiJRODH (ORCPT ); Tue, 18 Oct 2022 10:03:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58732 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231346AbiJROCn (ORCPT ); Tue, 18 Oct 2022 10:02:43 -0400 Received: from mga17.intel.com (mga17.intel.com [192.55.52.151]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77EC3CF879; Tue, 18 Oct 2022 07:02:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666101759; x=1697637759; h=from:to:cc:subject:date:message-id:in-reply-to: 
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Michal Swiatkowski,
    Maciej Fijalkowski, Alexander Lobakin, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 net-next 6/6] netlink: add universal 'bigint' attribute type
Date: Tue, 18 Oct 2022 16:00:27 +0200
Message-Id: <20221018140027.48086-7-alexandr.lobakin@intel.com>
In-Reply-To: <20221018140027.48086-1-alexandr.lobakin@intel.com>

Add a new type of Netlink attribute -- big integer. Basically, bigints are
just arrays of u32s, but they can carry anything, with 1-bit granularity.
Using variable-length arrays of a fixed type gives the following:

* versatility: one type can carry scalars from u8 to u64, bitmaps, binary
  data etc.;
* scalability: the same Netlink attribute can be changed to a wider (or
  shorter) data type with no compatibility issues, same for growing bitmaps;
* optimization: 4-byte units don't require wasting slots for empty padding
  attributes (they always have natural alignment in Netlink messages).

The only downside is that the get/put functions sometimes are not just
direct-assignment inlines, due to the internal representation using bitmaps
(longs) and the bitmap API.

Basic consumer functions/macros are:

* nla_put_bigint() and nla_get_bigint() -- to easily put a bigint to an skb
  or get it from a received message (only a pointer to an unsigned long
  array and the number of bits in it are needed);
* nla_put_bigint_{u,be,le,net}{8,16,32,64}() -- alternatives to the already
  existing family to send/receive scalars using the new type (instead of
  distinct attr types);
* nla_total_size_bigint*() -- to provide the estimated size in bytes that
  Netlink needs to store a bigint/type;
* NLA_POLICY_BIGINT*() -- to declare a Netlink policy for a bigint attribute.

There are also *_bitmap() aliases for the *_bigint() helpers; they behave
identically and are only meant to distinguish bigints from bitmaps at the
call sites (for readability).
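A rough usage sketch (the attribute and function names here are made up for
the example, not part of the patch), combining the policy, put and get sides:

	#include <net/netlink.h>

	/* hypothetical attribute carrying a 96-bit feature bitmap */
	enum {
		FOO_ATTR_UNSPEC,
		FOO_ATTR_FEATURES,
		__FOO_ATTR_CNT,
	};

	#define FOO_FEATURES_NBITS	96

	static const struct nla_policy foo_policy[__FOO_ATTR_CNT] = {
		[FOO_ATTR_FEATURES] = NLA_POLICY_BITMAP(FOO_FEATURES_NBITS),
	};

	/* sender side: reserve nla_total_size_bitmap(FOO_FEATURES_NBITS) bytes */
	static int foo_fill_features(struct sk_buff *skb,
				     const unsigned long *features)
	{
		return nla_put_bitmap(skb, FOO_ATTR_FEATURES, features,
				      FOO_FEATURES_NBITS);
	}

	/* receiver side: tb[] was parsed against foo_policy */
	static void foo_parse_features(struct nlattr **tb, unsigned long *features)
	{
		if (tb[FOO_ATTR_FEATURES])
			nla_get_bitmap(tb[FOO_ATTR_FEATURES], features,
				       FOO_FEATURES_NBITS);
	}

With a 96-bit attribute the payload is exactly 12 bytes of u32s, so no extra
padding attribute is needed, which is the "optimization" point above.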
Netlink policy for a bigint can have an optional bitmap mask of bits supported by the code -- for example, to filter out obsolete bits removed some time ago or limit value to n bits (e.g. 53 instead of 64). Without it, Netlink will just make sure no bits past the passed number are set. Both variants can be requested from the userspace and the kernel will put a mask into a new policy attribute (%NL_POLICY_TYPE_ATTR_BIGINT_MASK). Note on including into : seems to introduce no visible compilation time regressions, make includecheck doesn't see anything illegit as well. Hiding everything inside lib/nlattr.c would require making a couple dozens optimizable inlines external, doesn't sound optimal. Suggested-by: Jakub Kicinski # NLA_BITMAP -> NLA_BIGINT Signed-off-by: Alexander Lobakin --- include/net/netlink.h | 208 ++++++++++++++++++++++++++++++++++- include/uapi/linux/netlink.h | 6 + lib/nlattr.c | 42 ++++++- net/netlink/policy.c | 40 +++++++ 4 files changed, 294 insertions(+), 2 deletions(-) diff --git a/include/net/netlink.h b/include/net/netlink.h index 4418b1981e31..2b7194e7a540 100644 --- a/include/net/netlink.h +++ b/include/net/netlink.h @@ -2,7 +2,7 @@ #ifndef __NET_NETLINK_H #define __NET_NETLINK_H -#include +#include #include #include #include @@ -180,6 +180,7 @@ enum { NLA_S32, NLA_S64, NLA_BITFIELD32, + NLA_BIGINT, NLA_REJECT, __NLA_TYPE_MAX, }; @@ -235,12 +236,15 @@ enum nla_policy_validation { * given type fits, using it verifies minimum length * just like "All other" * NLA_BITFIELD32 Unused + * NLA_BIGINT Number of bits in the big integer * NLA_REJECT Unused * All other Minimum length of attribute payload * * Meaning of validation union: * NLA_BITFIELD32 This is a 32-bit bitmap/bitselector attribute and * `bitfield32_valid' is the u32 value of valid flags + * NLA_BIGINT `bigint_mask` is a pointer to the mask of the valid + * bits of the given bigint to perform the validation. * NLA_REJECT This attribute is always rejected and `reject_message' * may point to a string to report as the error instead * of the generic one in extended ACK. @@ -327,6 +331,7 @@ struct nla_policy { s16 min, max; u8 network_byte_order:1; }; + const unsigned long *bigint_mask; int (*validate)(const struct nlattr *attr, struct netlink_ext_ack *extack); /* This entry is special, and used for the attribute at index 0 @@ -451,6 +456,35 @@ struct nla_policy { } #define NLA_POLICY_MIN_LEN(_len) NLA_POLICY_MIN(NLA_BINARY, _len) +/** + * NLA_POLICY_BIGINT - represent &nla_policy for a bigint attribute + * @nbits - number of bits in the bigint + * @... - optional pointer to a bitmap carrying a mask of supported bits + */ +#define NLA_POLICY_BIGINT(nbits, ...) { \ + .type = NLA_BIGINT, \ + .len = (nbits), \ + .bigint_mask = \ + (typeof((__VA_ARGS__ + 0) ? : NULL))(__VA_ARGS__ + 0), \ + .validation_type = (__VA_ARGS__ + 0) ? NLA_VALIDATE_MASK : 0, \ +} + +/* Simplify (and encourage) using the bigint type to send scalars */ +#define NLA_POLICY_BIGINT_TYPE(type, ...) \ + NLA_POLICY_BIGINT(BITS_PER_TYPE(type), ##__VA_ARGS__) + +#define NLA_POLICY_BIGINT_U8 NLA_POLICY_BIGINT_TYPE(u8) +#define NLA_POLICY_BIGINT_U16 NLA_POLICY_BIGINT_TYPE(u16) +#define NLA_POLICY_BIGINT_U32 NLA_POLICY_BIGINT_TYPE(u32) +#define NLA_POLICY_BIGINT_U64 NLA_POLICY_BIGINT_TYPE(u64) + +/* Transparent alias (for readability purposes) */ +#define NLA_POLICY_BITMAP(nbits, ...) 
\ + NLA_POLICY_BIGINT((nbits), ##__VA_ARGS__) + +#define nla_policy_bigint_mask(pt) ((pt)->bigint_mask) +#define nla_policy_bigint_nbits(pt) ((pt)->len) + /** * struct nl_info - netlink source information * @nlh: Netlink message header of original request @@ -1556,6 +1590,28 @@ static inline int nla_put_bitfield32(struct sk_buff *skb, int attrtype, return nla_put(skb, attrtype, sizeof(tmp), &tmp); } +/** + * nla_put_bigint - Add a bigint Netlink attribute to a socket buffer + * @skb: socket buffer to add attribute to + * @attrtype: attribute type + * @bigint: bigint to put, as array of unsigned longs + * @nbits: number of bits in the bigint + */ +static inline int nla_put_bigint(struct sk_buff *skb, int attrtype, + const unsigned long *bigint, + size_t nbits) +{ + struct nlattr *nla; + + nla = nla_reserve(skb, attrtype, bitmap_arr32_size(nbits)); + if (unlikely(!nla)) + return -EMSGSIZE; + + bitmap_to_arr32(nla_data(nla), bigint, nbits); + + return 0; +} + /** * nla_get_u32 - return payload of u32 attribute * @nla: u32 netlink attribute @@ -1749,6 +1805,134 @@ static inline struct nla_bitfield32 nla_get_bitfield32(const struct nlattr *nla) return tmp; } +/** + * nla_get_bigint - Return a bigint from u32-array bigint Netlink attribute + * @nla: %NLA_BIGINT Netlink attribute + * @bigint: target container, as array of unsigned longs + * @nbits: expected number of bits in the bigint + */ +static inline void nla_get_bigint(const struct nlattr *nla, + unsigned long *bigint, + size_t nbits) +{ + size_t diff = BITS_TO_LONGS(nbits); + + /* Core validated nla_len() is (n + 1) * sizeof(u32), leave a hint */ + nbits = clamp_t(size_t, BYTES_TO_BITS(nla_len(nla)), + BITS_PER_TYPE(u32), nbits); + bitmap_from_arr32(bigint, nla_data(nla), nbits); + + diff -= BITS_TO_LONGS(nbits); + memset(bigint + BITS_TO_LONGS(nbits), 0, diff * sizeof(long)); +} + +/* The macros below build the following set of functions, allowing to + * easily use the %NLA_BIGINT API to send scalar values. Their fake + * declarations are provided under #if 0, so that source code indexers + * could build references to them. 
+ */ +#if 0 +int nla_put_bigint_s8(struct sk_buff *skb, int attrtype, __s8 value); +__s8 nla_get_bigint_s8(const struct nlattr *nla); +int nla_put_bigint_s16(struct sk_buff *skb, int attrtype, __s16 value); +__s16 nla_get_bigint_s16(const struct nlattr *nla); +int nla_put_bigint_s32(struct sk_buff *skb, int attrtype, __s32 value); +__s32 nla_get_bigint_s32(const struct nlattr *nla); +int nla_put_bigint_s64(struct sk_buff *skb, int attrtype, __s64 value); +__s64 nla_get_bigint_s64(const struct nlattr *nla); + +int nla_put_bigint_u8(struct sk_buff *skb, int attrtype, __u8 value); +__u8 nla_get_bigint_u8(const struct nlattr *nla); +int nla_put_bigint_u16(struct sk_buff *skb, int attrtype, __u16 value); +__u16 nla_get_bigint_u16(const struct nlattr *nla); +int nla_put_bigint_u32(struct sk_buff *skb, int attrtype, __u32 value); +__u32 nla_get_bigint_u32(const struct nlattr *nla); +int nla_put_bigint_u64(struct sk_buff *skb, int attrtype, __u64 value); +__u64 nla_get_bigint_u64(const struct nlattr *nla); + +int nla_put_bigint_be16(struct sk_buff *skb, int attrtype, __be16 value); +__be16 nla_get_bigint_be16(const struct nlattr *nla); +int nla_put_bigint_be32(struct sk_buff *skb, int attrtype, __be32 value); +__be32 nla_get_bigint_be32(const struct nlattr *nla); +int nla_put_bigint_be64(struct sk_buff *skb, int attrtype, __be64 value); +__be64 nla_get_bigint_be64(const struct nlattr *nla); + +int nla_put_bigint_le16(struct sk_buff *skb, int attrtype, __le16 value); +__le16 nla_get_bigint_le16(const struct nlattr *nla); +int nla_put_bigint_le32(struct sk_buff *skb, int attrtype, __le32 value); +__le32 nla_get_bigint_le32(const struct nlattr *nla); +int nla_put_bigint_le64(struct sk_buff *skb, int attrtype, __le64 value); +__le64 nla_get_bigint_le64(const struct nlattr *nla); + +int nla_put_bigint_net16(struct sk_buff *skb, int attrtype, __be16 value); +__be16 nla_get_bigint_net16(const struct nlattr *nla); +int nla_put_bigint_net32(struct sk_buff *skb, int attrtype, __be32 value); +__be32 nla_get_bigint_net32(const struct nlattr *nla); +int nla_put_bigint_net64(struct sk_buff *skb, int attrtype, __be64 value); +__be64 nla_get_bigint_net64(const struct nlattr *nla); +#endif + +#define NLA_BUILD_BIGINT_TYPE(type) \ +static inline int \ +nla_put_bigint_##type(struct sk_buff *skb, int attrtype, __##type value) \ +{ \ + DECLARE_BITMAP(bigint, BITS_PER_TYPE(u64)) = { \ + BITMAP_FROM_U64((__force u64)value), \ + }; \ + \ + return nla_put_bigint(skb, attrtype, bigint, \ + BITS_PER_TYPE(__##type)); \ +} \ + \ +static inline __##type \ +nla_get_bigint_##type(const struct nlattr *nla) \ +{ \ + DECLARE_BITMAP(bigint, BITS_PER_TYPE(u64)); \ + \ + nla_get_bigint(nla, bigint, BITS_PER_TYPE(__##type)); \ + \ + return (__force __##type)BITMAP_TO_U64(bigint); \ +} + +#define NLA_BUILD_BIGINT_NET(width) \ +static inline int \ +nla_put_bigint_net##width(struct sk_buff *skb, int attrtype, \ + __be##width value) \ +{ \ + return nla_put_bigint_be##width(skb, \ + attrtype | NLA_F_NET_BYTEORDER, \ + value); \ +} \ + \ +static inline __be##width \ +nla_get_bigint_net##width(const struct nlattr *nla) \ +{ \ + return nla_get_bigint_be##width(nla); \ +} + +#define NLA_BUILD_BIGINT_ORDER(order) \ + NLA_BUILD_BIGINT_TYPE(order##16); \ + NLA_BUILD_BIGINT_TYPE(order##32); \ + NLA_BUILD_BIGINT_TYPE(order##64) + +NLA_BUILD_BIGINT_TYPE(s8); +NLA_BUILD_BIGINT_TYPE(u8); + +NLA_BUILD_BIGINT_ORDER(s); +NLA_BUILD_BIGINT_ORDER(u); +NLA_BUILD_BIGINT_ORDER(be); +NLA_BUILD_BIGINT_ORDER(le); + +NLA_BUILD_BIGINT_NET(16); 
+NLA_BUILD_BIGINT_NET(32); +NLA_BUILD_BIGINT_NET(64); + +/* Aliases for readability */ +#define nla_put_bitmap(skb, attrtype, bitmap, nbits) \ + nla_put_bigint((skb), (attrtype), (bitmap), (nbits)) +#define nla_get_bitmap(nlattr, bitmap, nbits) \ + nla_get_bigint((nlattr), (bitmap), (nbits)) + /** * nla_memdup - duplicate attribute memory (kmemdup) * @src: netlink attribute to duplicate from @@ -1921,6 +2105,28 @@ static inline int nla_total_size_64bit(int payload) ; } +/** + * nla_total_size_bigint - get total size of Netlink attr for a number of bits + * @nbits: number of bits to store in the attribute + * + * Returns the size in bytes of a Netlink attribute needed to carry + * the specified number of bits. + */ +static inline size_t nla_total_size_bigint(size_t nbits) +{ + return nla_total_size(bitmap_arr32_size(nbits)); +} + +#define nla_total_size_bigint_type(type) \ + nla_total_size_bigint(BITS_PER_TYPE(type)) + +#define nla_total_size_bigint_u8() nla_total_size_bigint_type(u8) +#define nla_total_size_bigint_u16() nla_total_size_bigint_type(u16) +#define nla_total_size_bigint_u32() nla_total_size_bigint_type(u32) +#define nla_total_size_bigint_u64() nla_total_size_bigint_type(u64) + +#define nla_total_size_bitmap(nbits) nla_total_size_bigint(nbits) + /** * nla_for_each_attr - iterate over a stream of attributes * @pos: loop counter, set to current attribute diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h index e2ae82e3f9f7..15e599961b23 100644 --- a/include/uapi/linux/netlink.h +++ b/include/uapi/linux/netlink.h @@ -298,6 +298,8 @@ struct nla_bitfield32 { * entry has attributes again, the policy for those inner ones * and the corresponding maxtype may be specified. * @NL_ATTR_TYPE_BITFIELD32: &struct nla_bitfield32 attribute + * @NL_ATTR_TYPE_BIGINT: array of 32-bit unsigned integers which form + * one big integer or bitmap. Validated by an optional bitmask. */ enum netlink_attribute_type { NL_ATTR_TYPE_INVALID, @@ -322,6 +324,7 @@ enum netlink_attribute_type { NL_ATTR_TYPE_NESTED_ARRAY, NL_ATTR_TYPE_BITFIELD32, + NL_ATTR_TYPE_BIGINT, }; /** @@ -351,6 +354,8 @@ enum netlink_attribute_type { * bitfield32 type (U32) * @NL_POLICY_TYPE_ATTR_MASK: mask of valid bits for unsigned integers (U64) * @NL_POLICY_TYPE_ATTR_PAD: pad attribute for 64-bit alignment + * @NL_POLICY_TYPE_ATTR_BIGINT_MASK: array with mask of valid + * bits for bigints * * @__NL_POLICY_TYPE_ATTR_MAX: number of attributes * @NL_POLICY_TYPE_ATTR_MAX: highest attribute number @@ -369,6 +374,7 @@ enum netlink_policy_type_attr { NL_POLICY_TYPE_ATTR_BITFIELD32_MASK, NL_POLICY_TYPE_ATTR_PAD, NL_POLICY_TYPE_ATTR_MASK, + NL_POLICY_TYPE_ATTR_BIGINT_MASK, /* keep last */ __NL_POLICY_TYPE_ATTR_MAX, diff --git a/lib/nlattr.c b/lib/nlattr.c index 40f22b177d69..c923ee6d2876 100644 --- a/lib/nlattr.c +++ b/lib/nlattr.c @@ -81,6 +81,33 @@ static int validate_nla_bitfield32(const struct nlattr *nla, return 0; } +static int nla_validate_bigint_mask(const struct nla_policy *pt, + const struct nlattr *nla, + struct netlink_ext_ack *extack) +{ + unsigned long *bigint; + size_t nbits; + bool res; + + nbits = min_t(size_t, BYTES_TO_BITS(nla_len(nla)), + nla_policy_bigint_nbits(pt)); + + bigint = bitmap_alloc(nbits, in_task() ? 
GFP_KERNEL : GFP_ATOMIC); + if (!bigint) + return -ENOMEM; + + nla_get_bigint(nla, bigint, nbits); + res = bitmap_andnot(bigint, bigint, nla_policy_bigint_mask(pt), nbits); + bitmap_free(bigint); + + if (res) { + NL_SET_ERR_MSG_ATTR_POL(extack, nla, pt, "unexpected bit set"); + return -EINVAL; + } + + return 0; +} + static int nla_validate_array(const struct nlattr *head, int len, int maxtype, const struct nla_policy *policy, struct netlink_ext_ack *extack, @@ -365,6 +392,8 @@ static int nla_validate_mask(const struct nla_policy *pt, case NLA_U64: value = nla_get_u64(nla); break; + case NLA_BIGINT: + return nla_validate_bigint_mask(pt, nla, extack); default: return -EINVAL; } @@ -445,6 +474,15 @@ static int validate_nla(const struct nlattr *nla, int maxtype, goto out_err; break; + case NLA_BIGINT: + if (!bitmap_validate_arr32(nla_data(nla), nla_len(nla), + nla_policy_bigint_nbits(pt))) { + err = -EINVAL; + goto out_err; + } + + break; + case NLA_NUL_STRING: if (pt->len) minlen = min_t(int, attrlen, pt->len + 1); @@ -672,7 +710,9 @@ nla_policy_len(const struct nla_policy *p, int n) int i, len = 0; for (i = 0; i < n; i++, p++) { - if (p->len) + if (p->type == NLA_BIGINT) + len += nla_total_size_bigint(nla_policy_bigint_nbits(p)); + else if (p->len) len += nla_total_size(p->len); else if (nla_attr_len[p->type]) len += nla_total_size(nla_attr_len[p->type]); diff --git a/net/netlink/policy.c b/net/netlink/policy.c index 87e3de0fde89..79f8caeb8a77 100644 --- a/net/netlink/policy.c +++ b/net/netlink/policy.c @@ -234,6 +234,10 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt) 2 * (nla_attr_size(0) + nla_attr_size(sizeof(u64))); case NLA_BITFIELD32: return common + nla_attr_size(sizeof(u32)); + case NLA_BIGINT: + /* maximum is common, aligned validation mask as u32-arr */ + return common + + nla_total_size_bigint(nla_policy_bigint_nbits(pt)); case NLA_STRING: case NLA_NUL_STRING: case NLA_BINARY: @@ -247,6 +251,36 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt) return 0; } +static bool +__netlink_policy_dump_write_attr_bigint(struct sk_buff *skb, + const struct nla_policy *pt) +{ + if (pt->validation_type == NLA_VALIDATE_MASK) { + if (nla_put_bigint(skb, NL_POLICY_TYPE_ATTR_BIGINT_MASK, + nla_policy_bigint_mask(pt), + nla_policy_bigint_nbits(pt))) + return false; + } else { + unsigned long *mask; + int ret; + + mask = bitmap_alloc(nla_policy_bigint_nbits(pt), + in_task() ? GFP_KERNEL : GFP_ATOMIC); + if (!mask) + return false; + + bitmap_fill(mask, nla_policy_bigint_nbits(pt)); + ret = nla_put_bigint(skb, NL_POLICY_TYPE_ATTR_BIGINT_MASK, + mask, nla_policy_bigint_nbits(pt)); + bitmap_free(mask); + + if (ret) + return false; + } + + return true; +} + static int __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state, struct sk_buff *skb, @@ -346,6 +380,12 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state, pt->bitfield32_valid)) goto nla_put_failure; break; + case NLA_BIGINT: + if (!__netlink_policy_dump_write_attr_bigint(skb, pt)) + goto nla_put_failure; + + type = NL_ATTR_TYPE_BIGINT; + break; case NLA_STRING: case NLA_NUL_STRING: case NLA_BINARY: