From patchwork Thu Jul 21 15:59:47 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12925496
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Yury Norov, Andy Shevchenko, Michal Swiatkowski, Rasmus Villemoes, Nikolay Aleksandrov, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 1/4] bitmap: add converting from/to 64-bit arrays of explicit byteorder
Date: Thu, 21 Jul 2022 17:59:47 +0200
Message-Id: <20220721155950.747251-2-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220721155950.747251-1-alexandr.lobakin@intel.com>
References: <20220721155950.747251-1-alexandr.lobakin@intel.com>

Unlike bitmaps, which are purely host-endian and host-typed, arrays of
bits can have not only an explicit type, but an explicit endianness as
well. They can come from userspace, the network, hardware, etc.

Add the ability to pass explicitly byteordered arrays of u64s to
bitmap_{from,to}_arr64() by extending the existing external functions
and adding a couple of static inlines, so as not to change the
prototypes of the existing ones. Users of the existing API whose calls
were previously optimized to a simple copy are not affected, since the
external functions are called only when a byteswap is needed.
Signed-off-by: Alexander Lobakin
---
 include/linux/bitmap.h | 58 ++++++++++++++++++++++++++----
 lib/bitmap.c           | 82 ++++++++++++++++++++++++++++++++----------
 2 files changed, 115 insertions(+), 25 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 035d4ac66641..95408d6e0f94 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -72,8 +72,10 @@ struct device;
  * bitmap_allocate_region(bitmap, pos, order)  Allocate specified bit region
  * bitmap_from_arr32(dst, buf, nbits)          Copy nbits from u32[] buf to dst
  * bitmap_from_arr64(dst, buf, nbits)          Copy nbits from u64[] buf to dst
+ * bitmap_from_arr64_type(dst, buf, nbits, type) Copy nbits from {u,be,le}64[]
  * bitmap_to_arr32(buf, src, nbits)            Copy nbits from buf to u32[] dst
  * bitmap_to_arr64(buf, src, nbits)            Copy nbits from buf to u64[] dst
+ * bitmap_to_arr64_type(buf, src, nbits, type) Copy nbits to {u,be,le}64[] dst
  * bitmap_get_value8(map, start)               Get 8bit value from map at start
  * bitmap_set_value8(map, value, start)        Set 8bit value to map at start
  *
@@ -299,22 +301,64 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap,
 			(const unsigned long *) (bitmap), (nbits))
 #endif
 
+enum {
+	BITMAP_ARR_U64	= 0U,
+#ifdef __BIG_ENDIAN
+	BITMAP_ARR_BE64	= BITMAP_ARR_U64,
+	BITMAP_ARR_LE64,
+#else
+	BITMAP_ARR_LE64	= BITMAP_ARR_U64,
+	BITMAP_ARR_BE64,
+#endif
+	__BITMAP_ARR_TYPE_NUM,
+};
+
+void __bitmap_from_arr64_type(unsigned long *bitmap, const void *buf,
+			      unsigned int nbits, u32 type);
+void __bitmap_to_arr64_type(void *arr, const unsigned long *buf,
+			    unsigned int nbits, u32 type);
+
 /*
  * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32
  * machines the order of hi and lo parts of numbers match the bitmap structure.
  * In both cases conversion is not needed when copying data from/to arrays of
  * u64.
  */
-#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
-void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits);
-void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits);
+#ifdef __BIG_ENDIAN
+#define bitmap_is_arr64_native(type)				\
+	(__builtin_constant_p(type) && (type) == BITMAP_ARR_U64 && \
+	 BITS_PER_LONG == 64)
 #else
-#define bitmap_from_arr64(bitmap, buf, nbits)			\
-	bitmap_copy_clear_tail((unsigned long *)(bitmap), (const unsigned long *)(buf), (nbits))
-#define bitmap_to_arr64(buf, bitmap, nbits)			\
-	bitmap_copy_clear_tail((unsigned long *)(buf), (const unsigned long *)(bitmap), (nbits))
+#define bitmap_is_arr64_native(type)				\
+	(__builtin_constant_p(type) && (type) == BITMAP_ARR_U64)
 #endif
 
+static __always_inline void bitmap_from_arr64_type(unsigned long *bitmap,
+						   const void *buf,
+						   unsigned int nbits,
+						   u32 type)
+{
+	if (bitmap_is_arr64_native(type))
+		bitmap_copy_clear_tail(bitmap, buf, nbits);
+	else
+		__bitmap_from_arr64_type(bitmap, buf, nbits, type);
+}
+
+static __always_inline void bitmap_to_arr64_type(void *buf,
+						 const unsigned long *bitmap,
+						 unsigned int nbits, u32 type)
+{
+	if (bitmap_is_arr64_native(type))
+		bitmap_copy_clear_tail(buf, bitmap, nbits);
+	else
+		__bitmap_to_arr64_type(buf, bitmap, nbits, type);
+}
+
+#define bitmap_from_arr64(bitmap, buf, nbits)			\
+	bitmap_from_arr64_type((bitmap), (buf), (nbits), BITMAP_ARR_U64)
+#define bitmap_to_arr64(buf, bitmap, nbits)			\
+	bitmap_to_arr64_type((buf), (bitmap), (nbits), BITMAP_ARR_U64)
+
 static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
 			const unsigned long *src2, unsigned int nbits)
 {
diff --git a/lib/bitmap.c b/lib/bitmap.c
index 2b67cd657692..e660077f2099 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -1513,23 +1513,46 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits)
 EXPORT_SYMBOL(bitmap_to_arr32);
 #endif
 
-#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
 /**
- * bitmap_from_arr64 - copy the contents of u64 array of bits to bitmap
+ * __bitmap_from_arr64_type - copy the contents of u64 array of bits to bitmap
  * @bitmap: array of unsigned longs, the destination bitmap
- * @buf: array of u64 (in host byte order), the source bitmap
+ * @buf: array of u64/__be64/__le64, the source bitmap
  * @nbits: number of bits in @bitmap
+ * @type: type of the array (%BITMAP_ARR_*64)
  */
-void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits)
+void __bitmap_from_arr64_type(unsigned long *bitmap, const void *buf,
+			      unsigned int nbits, u32 type)
 {
+	const union {
+		__be64 be;
+		__le64 le;
+		u64 u;
+	} *src = buf;
 	int n;
 
 	for (n = nbits; n > 0; n -= 64) {
-		u64 val = *buf++;
+		u64 val;
+
+		switch (type) {
+#ifdef __LITTLE_ENDIAN
+		case BITMAP_ARR_BE64:
+			val = be64_to_cpu((src++)->be);
+			break;
+#else
+		case BITMAP_ARR_LE64:
+			val = le64_to_cpu((src++)->le);
+			break;
+#endif
+		default:
+			val = (src++)->u;
+			break;
+		}
 
 		*bitmap++ = val;
+#if BITS_PER_LONG == 32
 		if (n > 32)
 			*bitmap++ = val >> 32;
+#endif
 	}
 
 	/*
@@ -1542,28 +1565,51 @@ void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits
 	if (nbits % BITS_PER_LONG)
 		bitmap[-1] &= BITMAP_LAST_WORD_MASK(nbits);
 }
-EXPORT_SYMBOL(bitmap_from_arr64);
+EXPORT_SYMBOL(__bitmap_from_arr64_type);
 
 /**
- * bitmap_to_arr64 - copy the contents of bitmap to a u64 array of bits
- * @buf: array of u64 (in host byte order), the dest bitmap
+ * __bitmap_to_arr64_type - copy the contents of bitmap to a u64 array of bits
+ * @buf: array of u64/__be64/__le64, the dest bitmap
  * @bitmap: array of unsigned longs, the source bitmap
  * @nbits: number of bits in @bitmap
+ * @type: type of the array (%BITMAP_ARR_*64)
  */
-void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits)
+void __bitmap_to_arr64_type(void *buf, const unsigned long *bitmap,
+			    unsigned int nbits, u32 type)
 {
 	const unsigned long *end = bitmap + BITS_TO_LONGS(nbits);
+	union {
+		__be64 be;
+		__le64 le;
+		u64 u;
+	} *dst = buf;
 
 	while (bitmap < end) {
-		*buf = *bitmap++;
+		u64 val = *bitmap++;
+
+#if BITS_PER_LONG == 32
 		if (bitmap < end)
-			*buf |= (u64)(*bitmap++) << 32;
-		buf++;
-	}
+			val |= (u64)(*bitmap++) << 32;
+#endif
 
-	/* Clear tail bits in the last element of array beyond nbits. */
-	if (nbits % 64)
-		buf[-1] &= GENMASK_ULL((nbits - 1) % 64, 0);
-}
-EXPORT_SYMBOL(bitmap_to_arr64);
+		/* Clear tail bits in the last element of array beyond nbits. */
+		if (bitmap == end && (nbits % 64))
+			val &= GENMASK_ULL((nbits - 1) % 64, 0);
+
+		switch (type) {
+#ifdef __LITTLE_ENDIAN
+		case BITMAP_ARR_BE64:
+			(dst++)->be = cpu_to_be64(val);
+			break;
+#else
+		case BITMAP_ARR_LE64:
+			(dst++)->le = cpu_to_le64(val);
+			break;
 #endif
+		default:
+			(dst++)->u = val;
+			break;
+		}
+	}
+}
+EXPORT_SYMBOL(__bitmap_to_arr64_type);

From patchwork Thu Jul 21 15:59:48 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12925495
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Yury Norov, Andy Shevchenko, Michal Swiatkowski, Rasmus Villemoes, Nikolay Aleksandrov, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 2/4] bitmap: add a couple more helpers to work with arrays of u64s
Date: Thu, 21 Jul 2022 17:59:48 +0200
Message-Id: <20220721155950.747251-3-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220721155950.747251-1-alexandr.lobakin@intel.com>
References: <20220721155950.747251-1-alexandr.lobakin@intel.com>

Add two new functions to work on arr64s:

* bitmap_arr64_size() -- takes the number of bits to be stored in an
  arr64 and returns the number of bytes required to store such an
  arr64; can be useful when allocating memory for arr64 containers;
* bitmap_validate_arr64{,_type}() -- takes a pointer to an arr64 and
  its size in bytes, plus the expected number of bits and the array
  endianness. Ensures that the size is valid (a multiple of
  `sizeof(u64)`) and that no bits past the expected number are set
  (for the specified byteorder).
Signed-off-by: Alexander Lobakin
---
 include/linux/bitmap.h | 22 ++++++++++++++-
 lib/bitmap.c           | 63 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+), 1 deletion(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 95408d6e0f94..14add46e06e4 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -7,7 +7,8 @@
 #include
 #include
 #include
-#include
+#include
+#include
 #include
 #include
@@ -76,6 +77,9 @@ struct device;
  * bitmap_to_arr32(buf, src, nbits)            Copy nbits from buf to u32[] dst
  * bitmap_to_arr64(buf, src, nbits)            Copy nbits from buf to u64[] dst
  * bitmap_to_arr64_type(buf, src, nbits, type) Copy nbits to {u,be,le}64[] dst
+ * bitmap_validate_arr64_type(buf, len, nbits, type) Validate {u,be,le}64[]
+ * bitmap_validate_arr64(buf, len, nbits)      Validate u64[] buf of len bytes
+ * bitmap_arr64_size(nbits)                    Get size of u64[] arr for nbits
  * bitmap_get_value8(map, start)               Get 8bit value from map at start
  * bitmap_set_value8(map, value, start)        Set 8bit value to map at start
  *
@@ -317,6 +321,8 @@ void __bitmap_from_arr64_type(unsigned long *bitmap, const void *buf,
 			      unsigned int nbits, u32 type);
 void __bitmap_to_arr64_type(void *arr, const unsigned long *buf,
 			    unsigned int nbits, u32 type);
+int bitmap_validate_arr64_type(const void *buf, size_t len, size_t nbits,
+			       u32 type);
 
 /*
  * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32
@@ -358,6 +364,20 @@ static __always_inline void bitmap_to_arr64_type(void *buf,
 	bitmap_from_arr64_type((bitmap), (buf), (nbits), BITMAP_ARR_U64)
 #define bitmap_to_arr64(buf, bitmap, nbits)			\
 	bitmap_to_arr64_type((buf), (bitmap), (nbits), BITMAP_ARR_U64)
+#define bitmap_validate_arr64(buf, len, nbits)			\
+	bitmap_validate_arr64_type((buf), (len), (nbits), BITMAP_ARR_U64)
+
+/**
+ * bitmap_arr64_size - determine the size of array of u64s for a number of bits
+ * @nbits: number of bits to store in the array
+ *
+ * Returns the size in bytes of a u64s-array needed to carry the specified
+ * number of bits.
+ */
+static inline size_t bitmap_arr64_size(size_t nbits)
+{
+	return array_size(BITS_TO_U64(nbits), sizeof(u64));
+}
 
 static inline bool bitmap_and(unsigned long *dst, const unsigned long *src1,
 			const unsigned long *src2, unsigned int nbits)
diff --git a/lib/bitmap.c b/lib/bitmap.c
index e660077f2099..5ad6f18f27dc 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -1613,3 +1613,66 @@ void __bitmap_to_arr64_type(void *buf, const unsigned long *bitmap,
 	}
 }
 EXPORT_SYMBOL(__bitmap_to_arr64_type);
+
+/**
+ * bitmap_validate_arr64_type - perform validation of a u64-array bitmap
+ * @buf: array of u64/__be64/__le64, the dest bitmap
+ * @len: length of the array, in bytes
+ * @nbits: expected/supported number of bits in the bitmap
+ * @type: expected array type (%BITMAP_*64)
+ *
+ * Returns 0 if the array passed the checks (see below), -%EINVAL otherwise.
+ */
+int bitmap_validate_arr64_type(const void *buf, size_t len, size_t nbits,
+			       u32 type)
+{
+	size_t word = (nbits - 1) / BITS_PER_TYPE(u64);
+	u32 pos = (nbits - 1) % BITS_PER_TYPE(u64);
+	const union {
+		__be64 be;
+		__le64 le;
+		u64 u;
+	} *arr = buf;
+	u64 last;
+
+	/* Must consist of 1...n full u64s */
+	if (!len || len % sizeof(u64))
+		return -EINVAL;
+
+	/*
+	 * If the array is shorter than expected, assume we support
+	 * all of the bits set there
+	 */
+	if (word >= len / sizeof(u64))
+		return 0;
+
+	switch (type) {
+#ifdef __LITTLE_ENDIAN
+	case BITMAP_ARR_BE64:
+		last = be64_to_cpu(arr[word].be);
+		break;
+#else
+	case BITMAP_ARR_LE64:
+		last = le64_to_cpu(arr[word].le);
+		break;
+#endif
+	default:
+		last = arr[word].u;
+		break;
+	}
+
+	/* Last word must not contain any bits past the expected number */
+	if (last & ~GENMASK_ULL(pos, 0))
+		return -EINVAL;
+
+	/*
+	 * If the array is longer than expected, make sure all the bytes
+	 * past the expected length are zeroed
+	 */
+	len -= bitmap_arr64_size(nbits);
+	if (len && memchr_inv(&arr[word + 1], 0, len))
+		return -EINVAL;
+
+	return 0;
+}
+EXPORT_SYMBOL(bitmap_validate_arr64_type);

From patchwork Thu Jul 21 15:59:49 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12925492
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Yury Norov, Andy Shevchenko, Michal Swiatkowski, Rasmus Villemoes, Nikolay Aleksandrov, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 3/4] lib/test_bitmap: cover explicitly byteordered arr64s
Date: Thu, 21 Jul 2022 17:59:49 +0200
Message-Id: <20220721155950.747251-4-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220721155950.747251-1-alexandr.lobakin@intel.com>
References: <20220721155950.747251-1-alexandr.lobakin@intel.com>

When testing bitmap <-> arr64 conversion, test the big- and
little-endian variants as well to make sure it works as expected on
all platforms. Also, use the more thorough
bitmap_validate_arr64_type() instead of just checking the tail: it
handles the different endiannesses correctly (note that we don't pass
`sizeof(arr)` to it, as we poison the array with 0xa5).
Signed-off-by: Alexander Lobakin
---
 lib/test_bitmap.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 98754ff9fe68..8a44290b60ba 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -585,7 +585,7 @@ static void __init test_bitmap_arr32(void)
 	}
 }
 
-static void __init test_bitmap_arr64(void)
+static void __init test_bitmap_arr64_type(u32 type)
 {
 	unsigned int nbits, next_bit;
 	u64 arr[EXP1_IN_BITS / 64];
@@ -594,9 +594,11 @@ static void __init test_bitmap_arr64(void)
 	memset(arr, 0xa5, sizeof(arr));
 
 	for (nbits = 0; nbits < EXP1_IN_BITS; ++nbits) {
+		int res;
+
 		memset(bmap2, 0xff, sizeof(arr));
-		bitmap_to_arr64(arr, exp1, nbits);
-		bitmap_from_arr64(bmap2, arr, nbits);
+		bitmap_to_arr64_type(arr, exp1, nbits, type);
+		bitmap_from_arr64_type(bmap2, arr, nbits, type);
 		expect_eq_bitmap(bmap2, exp1, nbits);
 
 		next_bit = find_next_bit(bmap2, round_up(nbits, BITS_PER_LONG), nbits);
@@ -604,17 +606,21 @@
 			pr_err("bitmap_copy_arr64(nbits == %d:"
 				" tail is not safely cleared: %d\n", nbits, next_bit);
 
-		if ((nbits % 64) &&
-		    (arr[(nbits - 1) / 64] & ~GENMASK_ULL((nbits - 1) % 64, 0)))
-			pr_err("bitmap_to_arr64(nbits == %d): tail is not safely cleared: 0x%016llx (must be 0x%016llx)\n",
-			       nbits, arr[(nbits - 1) / 64],
-			       GENMASK_ULL((nbits - 1) % 64, 0));
+		res = bitmap_validate_arr64_type(arr, bitmap_arr64_size(nbits),
+						 nbits, type);
+		expect_eq_uint(nbits ? 0 : -EINVAL, res);
 
 		if (nbits < EXP1_IN_BITS - 64)
 			expect_eq_uint(arr[DIV_ROUND_UP(nbits, 64)], 0xa5a5a5a5);
 	}
 }
 
+static void __init test_bitmap_arr64(void)
+{
+	for (u32 type = 0; type < __BITMAP_ARR_TYPE_NUM; type++)
+		test_bitmap_arr64_type(type);
+}
+
 static void noinline __init test_mem_optimisations(void)
 {
 	DECLARE_BITMAP(bmap1, 1024);

From patchwork Thu Jul 21 15:59:50 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12925494
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Yury Norov, Andy Shevchenko, Michal Swiatkowski, Rasmus Villemoes, Nikolay Aleksandrov, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next 4/4] netlink: add 'bitmap' attribute type (%NL_ATTR_TYPE_BITMAP / %NLA_BITMAP)
Date: Thu, 21 Jul 2022 17:59:50 +0200
Message-Id: <20220721155950.747251-5-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220721155950.747251-1-alexandr.lobakin@intel.com>
References: <20220721155950.747251-1-alexandr.lobakin@intel.com>

Add a new type of Netlink attribute: bitmap. Internally, bitmaps are
represented as arrays of unsigned longs. This gives optimal
performance and memory usage; however, bitness-dependent types can't
be used to communicate between the kernel and userspace -- for
example, a user application can be 32-bit on a 64-bit system. So, to
provide a reliable communication data type, arrays of 64-bit words
are used.
The Netlink core takes care of converting them from/to unsigned longs
when sending or receiving Netlink messages; on LE and 64-bit systems
the conversion is a no-op. They can also have an explicit byteorder --
the core code handles this as well (both the kernel and userspace must
know the byteorder of a particular attribute in advance), along with
cases where userspace and the kernel assume a different number of bits
(and thus a different number of u64s) for an attribute.
The basic consumer functions/macros are:

* the nla_put_bitmap and nla_get_bitmap families -- to easily put a
  bitmap into an skb or get one from a received message (only a
  pointer to an unsigned long bitmap and its number of bits are
  needed), with optional explicit byteorder;
* nla_total_size_bitmap() -- to estimate the size in bytes Netlink
  needs to store a bitmap;
* {,__}NLA_POLICY_BITMAP() -- to declare a Netlink policy for a bitmap
  attribute.

A Netlink policy for a bitmap can carry an optional mask of the bits
supported by the code -- for example, to filter out obsolete bits
removed some time ago. Without it, Netlink only makes sure no bits
past the declared number are set. In both variants, the policy can be
requested from userspace and the kernel will put the mask into a new
policy attribute (%NL_POLICY_TYPE_ATTR_BITMAP_MASK).
Signed-off-by: Alexander Lobakin
---
 include/net/netlink.h        | 159 ++++++++++++++++++++++++++++++++++-
 include/uapi/linux/netlink.h |   5 ++
 lib/nlattr.c                 |  43 +++++++++-
 net/netlink/policy.c         |  44 ++++++++++
 4 files changed, 249 insertions(+), 2 deletions(-)

diff --git a/include/net/netlink.h b/include/net/netlink.h
index 7a2a9d3144ba..87fcb8d0cbe8 100644
--- a/include/net/netlink.h
+++ b/include/net/netlink.h
@@ -2,7 +2,7 @@
 #ifndef __NET_NETLINK_H
 #define __NET_NETLINK_H
 
-#include
+#include
 #include
 #include
 #include
@@ -180,6 +180,7 @@ enum {
 	NLA_S32,
 	NLA_S64,
 	NLA_BITFIELD32,
+	NLA_BITMAP,
 	NLA_REJECT,
 	__NLA_TYPE_MAX,
 };
@@ -235,12 +236,16 @@ enum nla_policy_validation {
  *                         given type fits, using it verifies minimum length
  *                         just like "All other"
  *    NLA_BITFIELD32       Unused
+ *    NLA_BITMAP           Number of bits in the bitmap
  *    NLA_REJECT           Unused
  *    All other            Minimum length of attribute payload
  *
  * Meaning of validation union:
  *    NLA_BITFIELD32       This is a 32-bit bitmap/bitselector attribute and
  *                         `bitfield32_valid' is the u32 value of valid flags
+ *    NLA_BITMAP           `bitmap_mask` is a pointer to the mask of the valid
+ *                         bits of the given bitmap to perform the validation,
+ *                         its lowest 2 bits specify its type (u64/be64/le64).
  *    NLA_REJECT           This attribute is always rejected and `reject_message'
  *                         may point to a string to report as the error instead
  *                         of the generic one in extended ACK.
@@ -326,6 +331,7 @@ struct nla_policy {
 		struct {
 			s16 min, max;
 		};
+		const unsigned long *bitmap_mask;
 		int (*validate)(const struct nlattr *attr,
 				struct netlink_ext_ack *extack);
 		/* This entry is special, and used for the attribute at index 0
@@ -442,6 +448,47 @@ struct nla_policy {
 }
 #define NLA_POLICY_MIN_LEN(_len)	NLA_POLICY_MIN(NLA_BINARY, _len)
 
+/* `unsigned long` has alignment of 4 or 8 bytes, so [1:0] are always zero.
+ * We put bitmap type (%BITMAP_ARR_*64) there to not inflate &nla_policy
+ * (one new `u32` field adds 10 Kb to kernel data). Bitmap type is 0 (native)
+ * in most cases, which means no pointer modifications.
+ * The variable arguments can take only one optional argument: pointer to
+ * the bitmap mask used for validation. If it's not present, ::bitmap_mask
+ * carries only bitmap type.
+ * The first cast here ensures that the passed mask bitmap is compatible with
+ * `const unsigned long *`, the second -- that @_type is scalar.
+ */
+#define __NLA_POLICY_BITMAP_MASK(_type, ...)			\
+	((typeof((__VA_ARGS__ + 0) ? : NULL))			\
+	 ((typeof((_type) + 0UL))(__VA_ARGS__ + 0) + (_type)))
+
+static_assert(__BITMAP_ARR_TYPE_NUM <= __alignof__(long));
+
+/**
+ * __NLA_POLICY_BITMAP - represent &nla_policy for a bitmap attribute
+ * @_nbits - number of bits in the bitmap
+ * @_type - type of an arr64 used for communication (%BITMAP_ARR_*64)
+ * @... - optional pointer to a bitmap carrying mask of supported bits
+ */
+#define __NLA_POLICY_BITMAP(_nbits, _type, ...) {		\
+	.type = NLA_BITMAP,					\
+	.len = (_nbits),					\
+	.bitmap_mask = __NLA_POLICY_BITMAP_MASK((_type), ##__VA_ARGS__), \
+	.validation_type = (__VA_ARGS__ + 0) ? NLA_VALIDATE_MASK : 0, \
+}
+
+#define NLA_POLICY_BITMAP(nbits, ...)				\
+	__NLA_POLICY_BITMAP((nbits), BITMAP_ARR_U64, ##__VA_ARGS__)
+
+#define nla_policy_bitmap_mask(pt)				\
+	((typeof((pt)->bitmap_mask))				\
+	 ((size_t)(pt)->bitmap_mask & ~(__alignof__(long) - 1)))
+
+#define nla_policy_bitmap_type(pt)				\
+	((u32)((size_t)(pt)->bitmap_mask & (__alignof__(long) - 1)))
+
+#define nla_policy_bitmap_nbits(pt)	((pt)->len)
+
 /**
  * struct nl_info - netlink source information
  * @nlh: Netlink message header of original request
@@ -1545,6 +1592,63 @@ static inline int nla_put_bitfield32(struct sk_buff *skb, int attrtype,
 	return nla_put(skb, attrtype, sizeof(tmp), &tmp);
 }
 
+/**
+ * __nla_put_bitmap - Add a bitmap netlink attribute to a socket buffer
+ * @skb: socket buffer to add attribute to
+ * @attrtype: attribute type
+ * @bitmap: bitmap to put
+ * @nbits: number of bits in the bitmap
+ * @type: type of the u64-array bitmap to put (%BITMAP_ARR_*64)
+ * @padattr: attribute type for the padding
+ */
+static inline int __nla_put_bitmap(struct sk_buff *skb, int attrtype,
+				   const unsigned long *bitmap,
+				   size_t nbits, u32 type, int padattr)
+{
+	struct nlattr *nla;
+
+	nla = nla_reserve_64bit(skb, attrtype, bitmap_arr64_size(nbits),
+				padattr);
+	if (unlikely(!nla))
+		return -EMSGSIZE;
+
+	bitmap_to_arr64_type(nla_data(nla), bitmap, nbits, type);
+
+	return 0;
+}
+
+static inline int nla_put_bitmap(struct sk_buff *skb, int attrtype,
+				 const unsigned long *bitmap, size_t nbits,
+				 int padattr)
+{
+	return __nla_put_bitmap(skb, attrtype, bitmap, nbits, BITMAP_ARR_U64,
+				padattr);
+}
+
+static inline int nla_put_bitmap_be(struct sk_buff *skb, int attrtype,
+				    const unsigned long *bitmap, size_t nbits,
+				    int padattr)
+{
+	return __nla_put_bitmap(skb, attrtype, bitmap, nbits, BITMAP_ARR_BE64,
+				padattr);
+}
+
+static inline int nla_put_bitmap_le(struct sk_buff *skb, int attrtype,
+				    const unsigned long *bitmap, size_t nbits,
+				    int padattr)
+{
+	return __nla_put_bitmap(skb, attrtype, bitmap, nbits, BITMAP_ARR_LE64,
+				padattr);
+}
+
+static inline int nla_put_bitmap_net(struct sk_buff *skb, int attrtype,
+				     const unsigned long *bitmap, size_t nbits,
+				     int padattr)
+{
+	return nla_put_bitmap_be(skb, attrtype | NLA_F_NET_BYTEORDER, bitmap,
+				 nbits, padattr);
+}
+
 /**
  * nla_get_u32 - return payload of u32 attribute
  * @nla: u32 netlink attribute
@@ -1738,6 +1842,47 @@ static inline struct nla_bitfield32 nla_get_bitfield32(const struct nlattr *nla)
 	return tmp;
 }
 
+/**
+ * __nla_get_bitmap - Return a bitmap from u64-array bitmap Netlink attribute
+ * @nla: %NLA_BITMAP Netlink attribute
+ * @bitmap: target container
+ * @nbits: expected number of bits in the bitmap
+ * @type: expected type of the attribute (%BITMAP_ARR_*64)
+ */
+static inline void __nla_get_bitmap(const struct nlattr *nla,
+				    unsigned long *bitmap,
+				    size_t nbits, u32 type)
+{
+	size_t diff = ALIGN(nbits, BITS_PER_LONG);
+
+	nbits = min_t(typeof(nbits), nbits, nla_len(nla) * BITS_PER_BYTE);
+	bitmap_from_arr64_type(bitmap, nla_data(nla), nbits, type);
+
+	diff -= ALIGN(nbits, BITS_PER_LONG);
+	if (diff)
+		bitmap_clear(bitmap, ALIGN(nbits, BITS_PER_LONG), diff);
+}
+
+static inline void nla_get_bitmap(const struct nlattr *nla,
+				  unsigned long *bitmap, size_t nbits)
+{
+	return __nla_get_bitmap(nla, bitmap, nbits, BITMAP_ARR_U64);
+}
+
+static inline void nla_get_bitmap_be(const struct nlattr *nla,
+				     unsigned long *bitmap, size_t nbits)
+{
+	return __nla_get_bitmap(nla, bitmap, nbits, BITMAP_ARR_BE64);
+}
+
+static inline void nla_get_bitmap_le(const struct nlattr *nla,
+				     unsigned long *bitmap, size_t nbits)
+{
+	return __nla_get_bitmap(nla, bitmap, nbits, BITMAP_ARR_LE64);
+}
+
+#define nla_get_bitmap_net	nla_get_bitmap_be
+
 /**
  * nla_memdup - duplicate attribute memory (kmemdup)
  * @src: netlink attribute to duplicate from
@@ -1910,6 +2055,18 @@ static inline int nla_total_size_64bit(int payload)
 		;
 }
 
+/**
+ * nla_total_size_bitmap - get total size of Netlink attr for a number of bits
+ * @nbits: number of bits to store in the attribute
+ *
+ * Returns the
size in bytes of a Netlink attribute needed to carry + * the specified number of bits. + */ +static inline size_t nla_total_size_bitmap(size_t nbits) +{ + return nla_total_size_64bit(bitmap_arr64_size(nbits)); +} + /** * nla_for_each_attr - iterate over a stream of attributes * @pos: loop counter, set to current attribute diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h index 855dffb4c1c3..cb55d3ce810b 100644 --- a/include/uapi/linux/netlink.h +++ b/include/uapi/linux/netlink.h @@ -284,6 +284,8 @@ struct nla_bitfield32 { * entry has attributes again, the policy for those inner ones * and the corresponding maxtype may be specified. * @NL_ATTR_TYPE_BITFIELD32: &struct nla_bitfield32 attribute + * @NL_ATTR_TYPE_BITMAP: array of 64-bit unsigned values (__{u,be,le}64) + * which form one big bitmap. Validated by an optional bitmask. */ enum netlink_attribute_type { NL_ATTR_TYPE_INVALID, @@ -308,6 +310,7 @@ enum netlink_attribute_type { NL_ATTR_TYPE_NESTED_ARRAY, NL_ATTR_TYPE_BITFIELD32, + NL_ATTR_TYPE_BITMAP, }; /** @@ -337,6 +340,7 @@ enum netlink_attribute_type { * bitfield32 type (U32) * @NL_POLICY_TYPE_ATTR_MASK: mask of valid bits for unsigned integers (U64) * @NL_POLICY_TYPE_ATTR_PAD: pad attribute for 64-bit alignment + * @NL_POLICY_TYPE_ATTR_BITMAP_MASK: mask of valid bits for bitmaps */ enum netlink_policy_type_attr { NL_POLICY_TYPE_ATTR_UNSPEC, @@ -352,6 +356,7 @@ enum netlink_policy_type_attr { NL_POLICY_TYPE_ATTR_BITFIELD32_MASK, NL_POLICY_TYPE_ATTR_PAD, NL_POLICY_TYPE_ATTR_MASK, + NL_POLICY_TYPE_ATTR_BITMAP_MASK, /* keep last */ __NL_POLICY_TYPE_ATTR_MAX, diff --git a/lib/nlattr.c b/lib/nlattr.c index 86029ad5ead4..ebff927cfe3a 100644 --- a/lib/nlattr.c +++ b/lib/nlattr.c @@ -81,6 +81,33 @@ static int validate_nla_bitfield32(const struct nlattr *nla, return 0; } +static int nla_validate_bitmap_mask(const struct nla_policy *pt, + const struct nlattr *nla, + struct netlink_ext_ack *extack) +{ + unsigned long *bitmap; + size_t nbits; + 
bool res; + + nbits = min_t(typeof(nbits), nla_len(nla) * BITS_PER_BYTE, + nla_policy_bitmap_nbits(pt)); + + bitmap = bitmap_alloc(nbits, in_task() ? GFP_KERNEL : GFP_ATOMIC); + if (!bitmap) + return -ENOMEM; + + __nla_get_bitmap(nla, bitmap, nbits, nla_policy_bitmap_type(pt)); + res = bitmap_andnot(bitmap, bitmap, nla_policy_bitmap_mask(pt), nbits); + kfree(bitmap); + + if (res) { + NL_SET_ERR_MSG_ATTR_POL(extack, nla, pt, "unexpected bit set"); + return -EINVAL; + } + + return 0; +} + static int nla_validate_array(const struct nlattr *head, int len, int maxtype, const struct nla_policy *policy, struct netlink_ext_ack *extack, @@ -342,6 +369,8 @@ static int nla_validate_mask(const struct nla_policy *pt, case NLA_U64: value = nla_get_u64(nla); break; + case NLA_BITMAP: + return nla_validate_bitmap_mask(pt, nla, extack); default: return -EINVAL; } @@ -422,6 +451,15 @@ static int validate_nla(const struct nlattr *nla, int maxtype, goto out_err; break; + case NLA_BITMAP: + err = bitmap_validate_arr64_type(nla_data(nla), nla_len(nla), + nla_policy_bitmap_nbits(pt), + nla_policy_bitmap_type(pt)); + if (err) + goto out_err; + + break; + case NLA_NUL_STRING: if (pt->len) minlen = min_t(int, attrlen, pt->len + 1); @@ -649,7 +687,10 @@ nla_policy_len(const struct nla_policy *p, int n) int i, len = 0; for (i = 0; i < n; i++, p++) { - if (p->len) + if (p->type == NLA_BITMAP) + len += + nla_total_size_bitmap(nla_policy_bitmap_nbits(p)); + else if (p->len) len += nla_total_size(p->len); else if (nla_attr_len[p->type]) len += nla_total_size(nla_attr_len[p->type]); diff --git a/net/netlink/policy.c b/net/netlink/policy.c index 8d7c900e27f4..8a5a86fb1549 100644 --- a/net/netlink/policy.c +++ b/net/netlink/policy.c @@ -224,6 +224,10 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt) 2 * (nla_attr_size(0) + nla_attr_size(sizeof(u64))); case NLA_BITFIELD32: return common + nla_attr_size(sizeof(u32)); + case NLA_BITMAP: + /* maximum is common, aligned 
validation mask as u64 bitmap */ + return common + + nla_total_size_bitmap(nla_policy_bitmap_nbits(pt)); case NLA_STRING: case NLA_NUL_STRING: case NLA_BINARY: @@ -237,6 +241,40 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt) return 0; } +static bool +__netlink_policy_dump_write_attr_bitmap(struct sk_buff *skb, + const struct nla_policy *pt) +{ + if (pt->validation_type == NLA_VALIDATE_MASK) { + if (__nla_put_bitmap(skb, NL_POLICY_TYPE_ATTR_BITMAP_MASK, + nla_policy_bitmap_mask(pt), + nla_policy_bitmap_nbits(pt), + nla_policy_bitmap_type(pt), + NL_POLICY_TYPE_ATTR_PAD)) + return false; + } else { + unsigned long *mask; + int ret; + + mask = bitmap_zalloc(nla_policy_bitmap_nbits(pt), + in_task() ? GFP_KERNEL : GFP_ATOMIC); + if (!mask) + return false; + + bitmap_set(mask, 0, nla_policy_bitmap_nbits(pt)); + ret = __nla_put_bitmap(skb, NL_POLICY_TYPE_ATTR_BITMAP_MASK, + mask, nla_policy_bitmap_nbits(pt), + nla_policy_bitmap_type(pt), + NL_POLICY_TYPE_ATTR_PAD); + kfree(mask); + + if (ret) + return false; + } + + return true; +} + static int __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state, struct sk_buff *skb, @@ -336,6 +374,12 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state, pt->bitfield32_valid)) goto nla_put_failure; break; + case NLA_BITMAP: + if (!__netlink_policy_dump_write_attr_bitmap(skb, pt)) + goto nla_put_failure; + + type = NL_ATTR_TYPE_BITMAP; + break; case NLA_STRING: case NLA_NUL_STRING: case NLA_BINARY: