From patchwork Tue Nov 19 00:23:38 2024
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 13879230
X-Patchwork-Delegate: kuba@kernel.org
From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:38 -0800
Subject: [PATCH net-next RFC v6 1/9] lib: packing: create __pack() and __unpack() variants without error checking
X-Mailing-List: netdev@vger.kernel.org
Message-Id:
<20241118-packing-pack-fields-and-ice-implementation-v6-1-6af8b658a6c3@intel.com> References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> To: Vladimir Oltean , Andrew Morton , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Tony Nguyen , Przemek Kitszel , Masahiro Yamada , netdev Cc: linux-kbuild@vger.kernel.org, Jacob Keller , Vladimir Oltean X-Mailer: b4 0.14.1 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Vladimir Oltean A future variant of the API, which works on arrays of packed_field structures, will make most of these checks redundant. The idea will be that we want to perform sanity checks at compile time, not once for every function call. Introduce new variants of pack() and unpack(), which elide the sanity checks, assuming that the input was pre-sanitized. Signed-off-by: Vladimir Oltean Signed-off-by: Jacob Keller --- lib/packing.c | 142 ++++++++++++++++++++++++++++++++-------------------------- 1 file changed, 78 insertions(+), 64 deletions(-) diff --git a/lib/packing.c b/lib/packing.c index 793942745e34..f237b8af99f5 100644 --- a/lib/packing.c +++ b/lib/packing.c @@ -51,64 +51,20 @@ static size_t calculate_box_addr(size_t box, size_t len, u8 quirks) return offset_of_group + offset_in_group; } -/** - * pack - Pack u64 number into bitfield of buffer. - * - * @pbuf: Pointer to a buffer holding the packed value. - * @uval: CPU-readable unpacked value to pack. - * @startbit: The index (in logical notation, compensated for quirks) where - * the packed value starts within pbuf. Must be larger than, or - * equal to, endbit. - * @endbit: The index (in logical notation, compensated for quirks) where - * the packed value ends within pbuf. Must be smaller than, or equal - * to, startbit. - * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf. - * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and - * QUIRK_MSB_ON_THE_RIGHT. - * - * Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming - * correct usage, return code may be discarded. The @pbuf memory will - * be modified on success. - */ -int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, - u8 quirks) +static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, + size_t pbuflen, u8 quirks) { /* Logical byte indices corresponding to the * start and end of the field. */ - int plogical_first_u8, plogical_last_u8, box; - /* width of the field to access in the pbuf */ - u64 value_width; - - /* startbit is expected to be larger than endbit, and both are - * expected to be within the logically addressable range of the buffer. - */ - if (unlikely(startbit < endbit || startbit >= BITS_PER_BYTE * pbuflen)) - /* Invalid function call */ - return -EINVAL; - - value_width = startbit - endbit + 1; - if (unlikely(value_width > 64)) - return -ERANGE; - - /* Check if "uval" fits in "value_width" bits. - * If value_width is 64, the check will fail, but any - * 64-bit uval will surely fit. - */ - if (unlikely(value_width < 64 && uval >= (1ull << value_width))) - /* Cannot store "uval" inside "value_width" bits. - * Truncating "uval" is most certainly not desirable, - * so simply erroring out is appropriate. 
- */ - return -ERANGE; + int plogical_first_u8 = startbit / BITS_PER_BYTE; + int plogical_last_u8 = endbit / BITS_PER_BYTE; + int box; /* Iterate through an idealistic view of the pbuf as an u64 with * no quirks, u8 by u8 (aligned at u8 boundaries), from high to low * logical bit significance. "box" denotes the current logical u8. */ - plogical_first_u8 = startbit / BITS_PER_BYTE; - plogical_last_u8 = endbit / BITS_PER_BYTE; - for (box = plogical_first_u8; box >= plogical_last_u8; box--) { /* Bit indices into the currently accessed 8-bit box */ size_t box_start_bit, box_end_bit, box_addr; @@ -163,15 +119,13 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, ((u8 *)pbuf)[box_addr] &= ~box_mask; ((u8 *)pbuf)[box_addr] |= pval; } - return 0; } -EXPORT_SYMBOL(pack); /** - * unpack - Unpack u64 number from packed buffer. + * pack - Pack u64 number into bitfield of buffer. * * @pbuf: Pointer to a buffer holding the packed value. - * @uval: Pointer to an u64 holding the unpacked value. + * @uval: CPU-readable unpacked value to pack. * @startbit: The index (in logical notation, compensated for quirks) where * the packed value starts within pbuf. Must be larger than, or * equal to, endbit. @@ -183,16 +137,12 @@ EXPORT_SYMBOL(pack); * QUIRK_MSB_ON_THE_RIGHT. * * Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming - * correct usage, return code may be discarded. The @uval will be - * modified on success. + * correct usage, return code may be discarded. The @pbuf memory will + * be modified on success. */ -int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit, - size_t pbuflen, u8 quirks) +int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, + u8 quirks) { - /* Logical byte indices corresponding to the - * start and end of the field. - */ - int plogical_first_u8, plogical_last_u8, box; /* width of the field to access in the pbuf */ u64 value_width; @@ -207,6 +157,33 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit, if (unlikely(value_width > 64)) return -ERANGE; + /* Check if "uval" fits in "value_width" bits. + * If value_width is 64, the check will fail, but any + * 64-bit uval will surely fit. + */ + if (value_width < 64 && uval >= (1ull << value_width)) + /* Cannot store "uval" inside "value_width" bits. + * Truncating "uval" is most certainly not desirable, + * so simply erroring out is appropriate. + */ + return -ERANGE; + + __pack(pbuf, uval, startbit, endbit, pbuflen, quirks); + + return 0; +} +EXPORT_SYMBOL(pack); + +static void __unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit, + size_t pbuflen, u8 quirks) +{ + /* Logical byte indices corresponding to the + * start and end of the field. + */ + int plogical_first_u8 = startbit / BITS_PER_BYTE; + int plogical_last_u8 = endbit / BITS_PER_BYTE; + int box; + /* Initialize parameter */ *uval = 0; @@ -214,9 +191,6 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit, * no quirks, u8 by u8 (aligned at u8 boundaries), from high to low * logical bit significance. "box" denotes the current logical u8. 
 	 */
-	plogical_first_u8 = startbit / BITS_PER_BYTE;
-	plogical_last_u8 = endbit / BITS_PER_BYTE;
-
 	for (box = plogical_first_u8; box >= plogical_last_u8; box--) {
 		/* Bit indices into the currently accessed 8-bit box */
 		size_t box_start_bit, box_end_bit, box_addr;
@@ -271,6 +245,46 @@ int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
 		*uval &= ~proj_mask;
 		*uval |= pval;
 	}
+}
+
+/**
+ * unpack - Unpack u64 number from packed buffer.
+ *
+ * @pbuf: Pointer to a buffer holding the packed value.
+ * @uval: Pointer to an u64 holding the unpacked value.
+ * @startbit: The index (in logical notation, compensated for quirks) where
+ *	      the packed value starts within pbuf. Must be larger than, or
+ *	      equal to, endbit.
+ * @endbit: The index (in logical notation, compensated for quirks) where
+ *	    the packed value ends within pbuf. Must be smaller than, or equal
+ *	    to, startbit.
+ * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf.
+ * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and
+ *	    QUIRK_MSB_ON_THE_RIGHT.
+ *
+ * Return: 0 on success, EINVAL or ERANGE if called incorrectly. Assuming
+ *	   correct usage, return code may be discarded. The @uval will be
+ *	   modified on success.
+ */
+int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit,
+	   size_t pbuflen, u8 quirks)
+{
+	/* width of the field to access in the pbuf */
+	u64 value_width;
+
+	/* startbit is expected to be larger than endbit, and both are
+	 * expected to be within the logically addressable range of the buffer.
+	 */
+	if (startbit < endbit || startbit >= BITS_PER_BYTE * pbuflen)
+		/* Invalid function call */
+		return -EINVAL;
+
+	value_width = startbit - endbit + 1;
+	if (value_width > 64)
+		return -ERANGE;
+
+	__unpack(pbuf, uval, startbit, endbit, pbuflen, quirks);
+
 	return 0;
 }
 EXPORT_SYMBOL(unpack);

From patchwork Tue Nov 19 00:23:39 2024
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 13879232
X-Patchwork-Delegate: kuba@kernel.org
From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:39 -0800
Subject: [PATCH net-next RFC v6 2/9] lib: packing: demote truncation error in pack() to a warning in __pack()
X-Mailing-List: netdev@vger.kernel.org
Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-2-6af8b658a6c3@intel.com>
References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada, netdev
Cc: linux-kbuild@vger.kernel.org, Jacob Keller, Vladimir Oltean
X-Mailer: b4 0.14.1
X-Patchwork-State: RFC

From: Vladimir Oltean

Most of the sanity checks in pack() and unpack() can be covered at compile
time. There is only one exception, and that is truncation of the uval
during a pack() operation.

We'd like the error-less __pack() to catch that condition as well. But at
the same time, it is currently the responsibility of consumer drivers
(currently just sja1105) to print anything at all when this error occurs,
and then discard the return code.

We can just print a loud warning in the library code and continue with the
truncated __pack() operation. In practice, having the warning is very
important, see commit 24deec6b9e4a ("net: dsa: sja1105: disallow C45
transactions on the BASE-TX MDIO bus") where the bug was caught exactly by
noticing this print.

Add the first print to the packing library, and at the same time remove
the print for the same condition from the sja1105 driver, to avoid double
printing.
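As a quick illustration of the new semantics (a minimal sketch, not taken
from the patch; the 8-bit field at bits 11:4 is invented): before this
change, pack() rejected an oversized value with -ERANGE; after it, __pack()
emits a WARN splat, stores the truncated value, and pack() returns 0.

	u8 buf[2] = {};

	/* The field spans bits 11:4, i.e. 8 bits; 0x1ff needs 9 bits. */
	int err = pack(buf, 0x1ff, 11, 4, sizeof(buf), 0);

	/* err == 0 here: __pack() has already WARN()ed about the truncation
	 * and written only the low 8 bits of the value into the field.
	 */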
Signed-off-by: Vladimir Oltean --- drivers/net/dsa/sja1105/sja1105_static_config.c | 8 ++------ lib/packing.c | 26 ++++++++++--------------- 2 files changed, 12 insertions(+), 22 deletions(-) diff --git a/drivers/net/dsa/sja1105/sja1105_static_config.c b/drivers/net/dsa/sja1105/sja1105_static_config.c index baba204ad62f..3d790f8c6f4d 100644 --- a/drivers/net/dsa/sja1105/sja1105_static_config.c +++ b/drivers/net/dsa/sja1105/sja1105_static_config.c @@ -26,12 +26,8 @@ void sja1105_pack(void *buf, const u64 *val, int start, int end, size_t len) pr_err("Start bit (%d) expected to be larger than end (%d)\n", start, end); } else if (rc == -ERANGE) { - if ((start - end + 1) > 64) - pr_err("Field %d-%d too large for 64 bits!\n", - start, end); - else - pr_err("Cannot store %llx inside bits %d-%d (would truncate)\n", - *val, start, end); + pr_err("Field %d-%d too large for 64 bits!\n", + start, end); } dump_stack(); } diff --git a/lib/packing.c b/lib/packing.c index f237b8af99f5..09a2d195b943 100644 --- a/lib/packing.c +++ b/lib/packing.c @@ -59,8 +59,17 @@ static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, */ int plogical_first_u8 = startbit / BITS_PER_BYTE; int plogical_last_u8 = endbit / BITS_PER_BYTE; + int value_width = startbit - endbit + 1; int box; + /* Check if "uval" fits in "value_width" bits. + * The test only works for value_width < 64, but in the latter case, + * any 64-bit uval will surely fit. + */ + WARN(value_width < 64 && uval >= (1ull << value_width), + "Cannot store 0x%llx inside bits %zu-%zu - will truncate\n", + uval, startbit, endbit); + /* Iterate through an idealistic view of the pbuf as an u64 with * no quirks, u8 by u8 (aligned at u8 boundaries), from high to low * logical bit significance. "box" denotes the current logical u8. @@ -143,9 +152,6 @@ static void __pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, u8 quirks) { - /* width of the field to access in the pbuf */ - u64 value_width; - /* startbit is expected to be larger than endbit, and both are * expected to be within the logically addressable range of the buffer. */ @@ -153,19 +159,7 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, /* Invalid function call */ return -EINVAL; - value_width = startbit - endbit + 1; - if (unlikely(value_width > 64)) - return -ERANGE; - - /* Check if "uval" fits in "value_width" bits. - * If value_width is 64, the check will fail, but any - * 64-bit uval will surely fit. - */ - if (value_width < 64 && uval >= (1ull << value_width)) - /* Cannot store "uval" inside "value_width" bits. - * Truncating "uval" is most certainly not desirable, - * so simply erroring out is appropriate. 
-	 */
+	if (unlikely(startbit - endbit >= 64))
+		return -ERANGE;
 
 	__pack(pbuf, uval, startbit, endbit, pbuflen, quirks);

From patchwork Tue Nov 19 00:23:40 2024
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 13879238
X-Patchwork-Delegate: kuba@kernel.org
From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:40 -0800
Subject: [PATCH net-next RFC v6 3/9] lib: packing: add pack_fields() and unpack_fields()
X-Mailing-List: netdev@vger.kernel.org
Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-3-6af8b658a6c3@intel.com>
References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada, netdev
Cc: linux-kbuild@vger.kernel.org, Jacob Keller, Vladimir Oltean
X-Mailer: b4 0.14.1
X-Patchwork-State: RFC

From: Vladimir Oltean

This is a new API which caters to the following requirements:

- Pack or unpack a large number of fields to/from a buffer with a small
  code footprint. The current alternative is to open-code a large number
  of calls to pack() and unpack(), or to use packing() to reduce that
  number to half. But packing() is not const-correct.

- Use unpacked numbers stored in variables smaller than u64. This reduces
  the rodata footprint of the stored field arrays.

- Perform error checking at compile time, rather than runtime, and return
  void from the API functions.

Because the C preprocessor can't generate variable length code (loops),
this is a bit tricky to do with macros. To handle this, implement macros
which sanity check the packed field definitions based on their size.
Finally, a single macro with a chain of __builtin_choose_expr() is used to
select the appropriate macros. We enforce the use of ascending or
descending order to avoid O(N^2) scaling when checking for overlap.

Note that the macros are written with care to ensure that the compilers
can correctly evaluate the resulting code at compile time. In particular,
the expressions for the pbuflen and the ordering check are passed all the
way down via macros. Earlier versions attempted to use statement
expressions with local variables, but not all compilers were able to fully
analyze these at compile time, resulting in BUILD_BUG_ON failures.

The overlap macro is passed a condition determining whether the fields are
expected to be in ascending or descending order based on the relative
ordering of the first two fields. This allows users to keep the fields in
whichever order is most natural for their hardware, while still keeping
the overlap checks scaling to O(N). This method also enables calling
CHECK_PACKED_FIELDS directly from within the pack_fields and unpack_fields
macros, ensuring all drivers using this API will receive type checking,
without needing to remember to call the CHECK_PACKED_FIELDS macro
themselves.

- Reduced rodata footprint for the storage of the packed field arrays. To
  that end, we have struct packed_field_s (small) and packed_field_m
  (medium). More can be added as needed (unlikely for now). On these
  types, the same generic pack_fields() and unpack_fields() API can be
  used, thanks to the new C11 _Generic() selection feature, which can call
  pack_fields_s() or pack_fields_m(), depending on the type of the
  "fields" array - a simplistic form of polymorphism. It is evaluated at
  compile time which function will actually be called.

Over time, packing() is expected to be completely replaced either with
pack() or with pack_fields().
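As a usage sketch of the macros added by this patch (illustrative only:
the register layout, struct name and field names are invented, and the
quirks value is arbitrary), a driver would describe its buffer layout once
as an array of packed fields and then convert whole structures in a single
call:

	/* Hypothetical unpacked representation of a 4-byte register. */
	struct my_entry {
		u32 index;
		u8 enable;
		u16 count;
	};

	/* Fields must be sorted by startbit (ascending or descending) and
	 * must not overlap; CHECK_PACKED_FIELDS() verifies this at compile
	 * time when pack_fields()/unpack_fields() are invoked.
	 */
	static const struct packed_field_s my_entry_fields[] = {
		PACKED_FIELD(31, 16, struct my_entry, index),
		PACKED_FIELD(15, 15, struct my_entry, enable),
		PACKED_FIELD(14, 0, struct my_entry, count),
	};

	/* The pbuflen passed to pack_fields() must be a compile-time constant. */
	#define MY_ENTRY_PACKED_SIZE	4

	void my_entry_pack(void *buf, const struct my_entry *entry)
	{
		pack_fields(buf, MY_ENTRY_PACKED_SIZE, entry, my_entry_fields,
			    QUIRK_LITTLE_ENDIAN);
	}

	void my_entry_unpack(const void *buf, struct my_entry *entry)
	{
		unpack_fields(buf, MY_ENTRY_PACKED_SIZE, entry, my_entry_fields,
			      QUIRK_LITTLE_ENDIAN);
	}

Since all bit offsets here are below 256 and the packed buffer is under 32
bytes, packed_field_s is sufficient, and the _Generic() selection inside
pack_fields()/unpack_fields() resolves to pack_fields_s()/unpack_fields_s()
at compile time.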
Signed-off-by: Vladimir Oltean Co-developed-by: Jacob Keller Signed-off-by: Jacob Keller --- Makefile | 4 + include/linux/packing.h | 37 + include/linux/packing_types.h | 2831 ++++++++++++++++++++++++++++++++++++ lib/packing.c | 145 ++ lib/packing_test.c | 61 + scripts/gen_packed_field_checks.c | 38 + Documentation/core-api/packing.rst | 58 + MAINTAINERS | 2 + scripts/Makefile | 2 +- 9 files changed, 3177 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index 79192a3024bf..f04f3b33f797 100644 --- a/Makefile +++ b/Makefile @@ -1307,6 +1307,10 @@ PHONY += scripts_unifdef scripts_unifdef: scripts_basic $(Q)$(MAKE) $(build)=scripts scripts/unifdef +PHONY += scripts_gen_packed_field_checks +scripts_gen_packed_field_checks: scripts_basic + $(Q)$(MAKE) $(build)=scripts scripts/gen_packed_field_checks + # --------------------------------------------------------------------------- # Install diff --git a/include/linux/packing.h b/include/linux/packing.h index 5d36dcd06f60..f9bfb2006030 100644 --- a/include/linux/packing.h +++ b/include/linux/packing.h @@ -7,6 +7,7 @@ #include #include +#include #define QUIRK_MSB_ON_THE_RIGHT BIT(0) #define QUIRK_LITTLE_ENDIAN BIT(1) @@ -26,4 +27,40 @@ int pack(void *pbuf, u64 uval, size_t startbit, size_t endbit, size_t pbuflen, int unpack(const void *pbuf, u64 *uval, size_t startbit, size_t endbit, size_t pbuflen, u8 quirks); +void pack_fields_s(void *pbuf, size_t pbuflen, const void *ustruct, + const struct packed_field_s *fields, size_t num_fields, + u8 quirks); + +void pack_fields_m(void *pbuf, size_t pbuflen, const void *ustruct, + const struct packed_field_m *fields, size_t num_fields, + u8 quirks); + +void unpack_fields_s(const void *pbuf, size_t pbuflen, void *ustruct, + const struct packed_field_s *fields, size_t num_fields, + u8 quirks); + +void unpack_fields_m(const void *pbuf, size_t pbuflen, void *ustruct, + const struct packed_field_m *fields, size_t num_fields, + u8 quirks); + +#define pack_fields(pbuf, pbuflen, ustruct, fields, quirks) \ + ({ \ + CHECK_PACKED_FIELDS(fields); \ + CHECK_PACKED_FIELDS_SIZE((fields), (pbuflen)); \ + _Generic((fields), \ + const struct packed_field_s * : pack_fields_s, \ + const struct packed_field_m * : pack_fields_m \ + )((pbuf), (pbuflen), (ustruct), (fields), ARRAY_SIZE(fields), (quirks)); \ + }) + +#define unpack_fields(pbuf, pbuflen, ustruct, fields, quirks) \ + ({ \ + CHECK_PACKED_FIELDS(fields); \ + CHECK_PACKED_FIELDS_SIZE((fields), (pbuflen)); \ + _Generic((fields), \ + const struct packed_field_s * : unpack_fields_s, \ + const struct packed_field_m * : unpack_fields_m \ + )((pbuf), (pbuflen), (ustruct), (fields), ARRAY_SIZE(fields), (quirks)); \ + }) + #endif diff --git a/include/linux/packing_types.h b/include/linux/packing_types.h new file mode 100644 index 000000000000..3bcc46850a59 --- /dev/null +++ b/include/linux/packing_types.h @@ -0,0 +1,2831 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2024, Intel Corporation + * Copyright (c) 2024, Vladimir Oltean + */ +#ifndef _LINUX_PACKING_TYPES_H +#define _LINUX_PACKING_TYPES_H + +#include + +/* If you add another packed field type, please update + * scripts/mod/packed_fields.c to enable compile time sanity checks. + */ + +#define GEN_PACKED_FIELD_MEMBERS(__type) \ + __type startbit; \ + __type endbit; \ + __type offset; \ + __type size + +/* Small packed field. Use with bit offsets < 256, buffers < 32B and + * unpacked structures < 256B. + */ +struct packed_field_s { + GEN_PACKED_FIELD_MEMBERS(u8); +}; + +/* Medium packed field. 
Use with bit offsets < 65536, buffers < 8KB and + * unpacked structures < 64KB. + */ +struct packed_field_m { + GEN_PACKED_FIELD_MEMBERS(u16); +}; + +#define PACKED_FIELD(start, end, struct_name, struct_field) \ +{ \ + (start), \ + (end), \ + offsetof(struct_name, struct_field), \ + sizeof_field(struct_name, struct_field), \ +} + +#define CHECK_PACKED_FIELD(field) ({ \ + typeof(field) __f = (field); \ + BUILD_BUG_ON(__f.startbit < __f.endbit); \ + BUILD_BUG_ON(__f.startbit - __f.endbit >= BITS_PER_BYTE * __f.size); \ + BUILD_BUG_ON(__f.size != 1 && __f.size != 2 && \ + __f.size != 4 && __f.size != 8); \ +}) + + +#define CHECK_PACKED_FIELD_OVERLAP(ascending, field1, field2) ({ \ + typeof(field1) _f1 = (field1); typeof(field2) _f2 = (field2); \ + const bool _a = (ascending); \ + BUILD_BUG_ON(_a && _f1.startbit >= _f2.startbit); \ + BUILD_BUG_ON(!_a && _f1.startbit <= _f2.startbit); \ + BUILD_BUG_ON(max(_f1.endbit, _f2.endbit) <= \ + min(_f1.startbit, _f2.startbit)); \ +}) + +#define CHECK_PACKED_FIELDS_SIZE(fields, pbuflen) ({ \ + typeof(&(fields)[0]) _f = (fields); \ + typeof(pbuflen) _len = (pbuflen); \ + const size_t num_fields = ARRAY_SIZE(fields); \ + BUILD_BUG_ON(!__builtin_constant_p(_len)); \ + BUILD_BUG_ON(_f[0].startbit >= BITS_PER_BYTE * _len); \ + BUILD_BUG_ON(_f[num_fields - 1].startbit >= BITS_PER_BYTE * _len); \ +}) + +/* Do not hand-edit the following packed field check macros! + * + * They are generated using scripts/gen_packed_field_checks.c, which may be + * built via "make scripts_gen_packed_field_checks". If larger macro sizes are + * needed in the future, please use this program to re-generate the macros and + * insert them here. + */ + +#define CHECK_PACKED_FIELDS_1(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 1); \ + CHECK_PACKED_FIELD(_f[0]); }) + +#define CHECK_PACKED_FIELDS_2(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 2); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); }) + +#define CHECK_PACKED_FIELDS_3(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 3); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); }) + +#define CHECK_PACKED_FIELDS_4(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 4); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); }) + +#define CHECK_PACKED_FIELDS_5(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 5); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < 
_f[1].startbit, _f[3], _f[4]); }) + +#define CHECK_PACKED_FIELDS_6(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 6); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); }) + +#define CHECK_PACKED_FIELDS_7(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 7); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); }) + +#define CHECK_PACKED_FIELDS_8(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 8); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); }) + +#define CHECK_PACKED_FIELDS_9(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 9); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); }) + +#define 
CHECK_PACKED_FIELDS_10(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 10); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); }) + +#define CHECK_PACKED_FIELDS_11(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 11); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); }) + +#define CHECK_PACKED_FIELDS_12(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 12); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); }) + +#define CHECK_PACKED_FIELDS_13(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 13); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); }) + +#define CHECK_PACKED_FIELDS_14(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 14); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); }) + +#define CHECK_PACKED_FIELDS_15(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 15); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + 
CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); }) + +#define CHECK_PACKED_FIELDS_16(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 16); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); }) + +#define CHECK_PACKED_FIELDS_17(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 17); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + 
CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); }) + +#define CHECK_PACKED_FIELDS_18(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 18); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], 
_f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); }) + +#define CHECK_PACKED_FIELDS_19(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 19); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); }) + +#define CHECK_PACKED_FIELDS_20(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 20); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < 
_f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); }) + +#define CHECK_PACKED_FIELDS_21(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 21); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); }) 
+ +#define CHECK_PACKED_FIELDS_22(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 22); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); }) + +#define CHECK_PACKED_FIELDS_23(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 23); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); }) + +#define CHECK_PACKED_FIELDS_24(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 24); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); }) + +#define CHECK_PACKED_FIELDS_25(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 25); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, 
_f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); }) + +#define CHECK_PACKED_FIELDS_26(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 26); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); }) + +#define CHECK_PACKED_FIELDS_27(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 27); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + 
CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); }) + +#define CHECK_PACKED_FIELDS_28(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 28); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + 
CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); }) + +#define CHECK_PACKED_FIELDS_29(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 29); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); }) + +#define CHECK_PACKED_FIELDS_30(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 30); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, 
_f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); }) + +#define CHECK_PACKED_FIELDS_31(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 31); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); }) + +#define CHECK_PACKED_FIELDS_32(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 32); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ 
+ CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); }) + +#define CHECK_PACKED_FIELDS_33(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 33); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); }) + +#define CHECK_PACKED_FIELDS_34(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 34); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); }) + +#define CHECK_PACKED_FIELDS_35(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 35); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + 
CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); }) + +#define CHECK_PACKED_FIELDS_36(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 36); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + 
CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); }) + +#define CHECK_PACKED_FIELDS_37(fields) \ + ({ 
typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 37); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < 
_f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); }) + +#define CHECK_PACKED_FIELDS_38(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 38); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); }) + +#define CHECK_PACKED_FIELDS_39(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 39); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); }) + +#define CHECK_PACKED_FIELDS_40(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 40); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + 
CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); }) + +#define CHECK_PACKED_FIELDS_41(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 41); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + 
CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); 
\ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); }) + +#define CHECK_PACKED_FIELDS_42(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 42); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ 
+ CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); }) + +#define CHECK_PACKED_FIELDS_43(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 43); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); }) + +#define CHECK_PACKED_FIELDS_44(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 44); \ + CHECK_PACKED_FIELD(_f[0]); \ + 
CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); }) + +#define CHECK_PACKED_FIELDS_45(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 45); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, 
_f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); }) + +#define CHECK_PACKED_FIELDS_46(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 46); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + 
CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD(_f[45]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[44], _f[45]); }) + +#define CHECK_PACKED_FIELDS_47(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 47); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD(_f[45]); \ + CHECK_PACKED_FIELD(_f[46]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[44], _f[45]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[45], _f[46]); }) + +#define CHECK_PACKED_FIELDS_48(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 48); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + 
CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD(_f[45]); \ + CHECK_PACKED_FIELD(_f[46]); \ + CHECK_PACKED_FIELD(_f[47]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[44], _f[45]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[45], _f[46]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[46], _f[47]); }) + +#define CHECK_PACKED_FIELDS_49(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 49); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD(_f[45]); \ + CHECK_PACKED_FIELD(_f[46]); \ + CHECK_PACKED_FIELD(_f[47]); \ + CHECK_PACKED_FIELD(_f[48]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + 
CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[44], _f[45]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[45], _f[46]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[46], _f[47]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[47], _f[48]); }) + +#define CHECK_PACKED_FIELDS_50(fields) \ + ({ typeof(&(fields)[0]) _f = (fields); \ + BUILD_BUG_ON(ARRAY_SIZE(fields) != 50); \ + CHECK_PACKED_FIELD(_f[0]); \ + CHECK_PACKED_FIELD(_f[1]); \ + CHECK_PACKED_FIELD(_f[2]); \ + CHECK_PACKED_FIELD(_f[3]); \ + CHECK_PACKED_FIELD(_f[4]); \ + CHECK_PACKED_FIELD(_f[5]); \ + CHECK_PACKED_FIELD(_f[6]); \ + CHECK_PACKED_FIELD(_f[7]); \ + 
CHECK_PACKED_FIELD(_f[8]); \ + CHECK_PACKED_FIELD(_f[9]); \ + CHECK_PACKED_FIELD(_f[10]); \ + CHECK_PACKED_FIELD(_f[11]); \ + CHECK_PACKED_FIELD(_f[12]); \ + CHECK_PACKED_FIELD(_f[13]); \ + CHECK_PACKED_FIELD(_f[14]); \ + CHECK_PACKED_FIELD(_f[15]); \ + CHECK_PACKED_FIELD(_f[16]); \ + CHECK_PACKED_FIELD(_f[17]); \ + CHECK_PACKED_FIELD(_f[18]); \ + CHECK_PACKED_FIELD(_f[19]); \ + CHECK_PACKED_FIELD(_f[20]); \ + CHECK_PACKED_FIELD(_f[21]); \ + CHECK_PACKED_FIELD(_f[22]); \ + CHECK_PACKED_FIELD(_f[23]); \ + CHECK_PACKED_FIELD(_f[24]); \ + CHECK_PACKED_FIELD(_f[25]); \ + CHECK_PACKED_FIELD(_f[26]); \ + CHECK_PACKED_FIELD(_f[27]); \ + CHECK_PACKED_FIELD(_f[28]); \ + CHECK_PACKED_FIELD(_f[29]); \ + CHECK_PACKED_FIELD(_f[30]); \ + CHECK_PACKED_FIELD(_f[31]); \ + CHECK_PACKED_FIELD(_f[32]); \ + CHECK_PACKED_FIELD(_f[33]); \ + CHECK_PACKED_FIELD(_f[34]); \ + CHECK_PACKED_FIELD(_f[35]); \ + CHECK_PACKED_FIELD(_f[36]); \ + CHECK_PACKED_FIELD(_f[37]); \ + CHECK_PACKED_FIELD(_f[38]); \ + CHECK_PACKED_FIELD(_f[39]); \ + CHECK_PACKED_FIELD(_f[40]); \ + CHECK_PACKED_FIELD(_f[41]); \ + CHECK_PACKED_FIELD(_f[42]); \ + CHECK_PACKED_FIELD(_f[43]); \ + CHECK_PACKED_FIELD(_f[44]); \ + CHECK_PACKED_FIELD(_f[45]); \ + CHECK_PACKED_FIELD(_f[46]); \ + CHECK_PACKED_FIELD(_f[47]); \ + CHECK_PACKED_FIELD(_f[48]); \ + CHECK_PACKED_FIELD(_f[49]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[0], _f[1]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[1], _f[2]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[2], _f[3]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[3], _f[4]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[4], _f[5]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[5], _f[6]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[6], _f[7]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[7], _f[8]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[8], _f[9]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[9], _f[10]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[10], _f[11]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[11], _f[12]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[12], _f[13]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[13], _f[14]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[14], _f[15]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[15], _f[16]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[16], _f[17]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[17], _f[18]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[18], _f[19]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[19], _f[20]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[20], _f[21]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[21], _f[22]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[22], _f[23]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[23], _f[24]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[24], _f[25]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[25], _f[26]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[26], _f[27]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < 
_f[1].startbit, _f[27], _f[28]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[28], _f[29]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[29], _f[30]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[30], _f[31]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[31], _f[32]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[32], _f[33]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[33], _f[34]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[34], _f[35]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[35], _f[36]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[36], _f[37]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[37], _f[38]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[38], _f[39]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[39], _f[40]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[40], _f[41]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[41], _f[42]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[42], _f[43]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[43], _f[44]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[44], _f[45]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[45], _f[46]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[46], _f[47]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[47], _f[48]); \ + CHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[48], _f[49]); }) + +#define CHECK_PACKED_FIELDS(fields) \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 1, CHECK_PACKED_FIELDS_1(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 2, CHECK_PACKED_FIELDS_2(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 3, CHECK_PACKED_FIELDS_3(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 4, CHECK_PACKED_FIELDS_4(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 5, CHECK_PACKED_FIELDS_5(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 6, CHECK_PACKED_FIELDS_6(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 7, CHECK_PACKED_FIELDS_7(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 8, CHECK_PACKED_FIELDS_8(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 9, CHECK_PACKED_FIELDS_9(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 10, CHECK_PACKED_FIELDS_10(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 11, CHECK_PACKED_FIELDS_11(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 12, CHECK_PACKED_FIELDS_12(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 13, CHECK_PACKED_FIELDS_13(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 14, CHECK_PACKED_FIELDS_14(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 15, CHECK_PACKED_FIELDS_15(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 16, CHECK_PACKED_FIELDS_16(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 17, CHECK_PACKED_FIELDS_17(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 18, CHECK_PACKED_FIELDS_18(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 19, CHECK_PACKED_FIELDS_19(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 20, CHECK_PACKED_FIELDS_20(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 21, CHECK_PACKED_FIELDS_21(fields), \ + 
__builtin_choose_expr(ARRAY_SIZE(fields) == 22, CHECK_PACKED_FIELDS_22(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 23, CHECK_PACKED_FIELDS_23(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 24, CHECK_PACKED_FIELDS_24(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 25, CHECK_PACKED_FIELDS_25(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 26, CHECK_PACKED_FIELDS_26(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 27, CHECK_PACKED_FIELDS_27(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 28, CHECK_PACKED_FIELDS_28(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 29, CHECK_PACKED_FIELDS_29(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 30, CHECK_PACKED_FIELDS_30(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 31, CHECK_PACKED_FIELDS_31(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 32, CHECK_PACKED_FIELDS_32(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 33, CHECK_PACKED_FIELDS_33(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 34, CHECK_PACKED_FIELDS_34(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 35, CHECK_PACKED_FIELDS_35(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 36, CHECK_PACKED_FIELDS_36(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 37, CHECK_PACKED_FIELDS_37(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 38, CHECK_PACKED_FIELDS_38(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 39, CHECK_PACKED_FIELDS_39(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 40, CHECK_PACKED_FIELDS_40(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 41, CHECK_PACKED_FIELDS_41(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 42, CHECK_PACKED_FIELDS_42(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 43, CHECK_PACKED_FIELDS_43(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 44, CHECK_PACKED_FIELDS_44(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 45, CHECK_PACKED_FIELDS_45(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 46, CHECK_PACKED_FIELDS_46(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 47, CHECK_PACKED_FIELDS_47(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 48, CHECK_PACKED_FIELDS_48(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 49, CHECK_PACKED_FIELDS_49(fields), \ + __builtin_choose_expr(ARRAY_SIZE(fields) == 50, CHECK_PACKED_FIELDS_50(fields), \ + ({ BUILD_BUG_ON_MSG(1, "CHECK_PACKED_FIELDS() must be regenerated to support array sizes larger than 50."); }) \ + )))))))))))))))))))))))))))))))))))))))))))))))))) + +#endif diff --git a/lib/packing.c b/lib/packing.c index 09a2d195b943..45164f73fe5b 100644 --- a/lib/packing.c +++ b/lib/packing.c @@ -5,10 +5,37 @@ #include #include #include +#include #include #include #include +#define __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks) \ + ({ \ + for (size_t i = 0; i < (num_fields); i++) { \ + typeof(&(fields)[0]) field = &(fields)[i]; \ + u64 uval; \ + \ + uval = ustruct_field_to_u64(ustruct, field->offset, field->size); \ + \ + __pack(pbuf, uval, field->startbit, field->endbit, \ + pbuflen, quirks); \ + } \ + }) + +#define __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks) \ + ({ \ + for (size_t i = 0; i < (num_fields); i++) { \ + typeof(&(fields)[0]) field = &fields[i]; \ + u64 uval; \ + \ + __unpack(pbuf, &uval, field->startbit, field->endbit, \ + pbuflen, quirks); \ + \ + u64_to_ustruct_field(ustruct, field->offset, field->size, uval); \ + } \ + }) + /** * 
calculate_box_addr - Determine physical location of byte in buffer * @box: Index of byte within buffer seen as a logical big-endian big number @@ -322,4 +349,122 @@ int packing(void *pbuf, u64 *uval, int startbit, int endbit, size_t pbuflen, } EXPORT_SYMBOL(packing); +static u64 ustruct_field_to_u64(const void *ustruct, size_t field_offset, + size_t field_size) +{ + switch (field_size) { + case 1: + return *((u8 *)(ustruct + field_offset)); + case 2: + return *((u16 *)(ustruct + field_offset)); + case 4: + return *((u32 *)(ustruct + field_offset)); + default: + return *((u64 *)(ustruct + field_offset)); + } +} + +static void u64_to_ustruct_field(void *ustruct, size_t field_offset, + size_t field_size, u64 uval) +{ + switch (field_size) { + case 1: + *((u8 *)(ustruct + field_offset)) = uval; + break; + case 2: + *((u16 *)(ustruct + field_offset)) = uval; + break; + case 4: + *((u32 *)(ustruct + field_offset)) = uval; + break; + default: + *((u64 *)(ustruct + field_offset)) = uval; + break; + } +} + +/** + * pack_fields_s - Pack array of small fields + * + * @pbuf: Pointer to a buffer holding the packed value. + * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf. + * @ustruct: Pointer to CPU-readable structure holding the unpacked value. + * It is expected (but not checked) that this has the same data type + * as all struct packed_field_s definitions. + * @fields: Array of small packed fields definition. They must not overlap. + * @num_fields: Length of @fields array. + * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and + * QUIRK_MSB_ON_THE_RIGHT. + */ +void pack_fields_s(void *pbuf, size_t pbuflen, const void *ustruct, + const struct packed_field_s *fields, size_t num_fields, + u8 quirks) +{ + __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks); +} +EXPORT_SYMBOL(pack_fields_s); + +/** + * pack_fields_m - Pack array of medium fields + * + * @pbuf: Pointer to a buffer holding the packed value. + * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf. + * @ustruct: Pointer to CPU-readable structure holding the unpacked value. + * It is expected (but not checked) that this has the same data type + * as all struct packed_field_s definitions. + * @fields: Array of medium packed fields definition. They must not overlap. + * @num_fields: Length of @fields array. + * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and + * QUIRK_MSB_ON_THE_RIGHT. + */ +void pack_fields_m(void *pbuf, size_t pbuflen, const void *ustruct, + const struct packed_field_m *fields, size_t num_fields, + u8 quirks) +{ + __pack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks); +} +EXPORT_SYMBOL(pack_fields_m); + +/** + * unpack_fields_s - Unpack array of small fields + * + * @pbuf: Pointer to a buffer holding the packed value. + * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf. + * @ustruct: Pointer to CPU-readable structure holding the unpacked value. + * It is expected (but not checked) that this has the same data type + * as all struct packed_field_s definitions. + * @fields: Array of small packed fields definition. They must not overlap. + * @num_fields: Length of @fields array. + * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and + * QUIRK_MSB_ON_THE_RIGHT. 
+ */ +void unpack_fields_s(const void *pbuf, size_t pbuflen, void *ustruct, + const struct packed_field_s *fields, size_t num_fields, + u8 quirks) +{ + __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks); +} +EXPORT_SYMBOL(unpack_fields_s); + +/** + * unpack_fields_m - Unpack array of medium fields + * + * @pbuf: Pointer to a buffer holding the packed value. + * @pbuflen: The length in bytes of the packed buffer pointed to by @pbuf. + * @ustruct: Pointer to CPU-readable structure holding the unpacked value. + * It is expected (but not checked) that this has the same data type + * as all struct packed_field_s definitions. + * @fields: Array of medium packed fields definition. They must not overlap. + * @num_fields: Length of @fields array. + * @quirks: A bit mask of QUIRK_LITTLE_ENDIAN, QUIRK_LSW32_IS_FIRST and + * QUIRK_MSB_ON_THE_RIGHT. + */ +void unpack_fields_m(const void *pbuf, size_t pbuflen, void *ustruct, + const struct packed_field_m *fields, size_t num_fields, + u8 quirks) +{ + __unpack_fields(pbuf, pbuflen, ustruct, fields, num_fields, quirks); +} +EXPORT_SYMBOL(unpack_fields_m); + MODULE_DESCRIPTION("Generic bitfield packing and unpacking"); diff --git a/lib/packing_test.c b/lib/packing_test.c index b38ea43c03fd..3b4167ce56bf 100644 --- a/lib/packing_test.c +++ b/lib/packing_test.c @@ -396,9 +396,70 @@ static void packing_test_unpack(struct kunit *test) KUNIT_EXPECT_EQ(test, uval, params->uval); } +#define PACKED_BUF_SIZE 8 + +typedef struct __packed { u8 buf[PACKED_BUF_SIZE]; } packed_buf_t; + +struct test_data { + u32 field3; + u16 field2; + u16 field4; + u16 field6; + u8 field1; + u8 field5; +}; + +static const struct packed_field_s test_fields[] = { + PACKED_FIELD(63, 61, struct test_data, field1), + PACKED_FIELD(60, 52, struct test_data, field2), + PACKED_FIELD(51, 28, struct test_data, field3), + PACKED_FIELD(27, 14, struct test_data, field4), + PACKED_FIELD(13, 9, struct test_data, field5), + PACKED_FIELD(8, 0, struct test_data, field6), +}; + +static void packing_test_pack_fields(struct kunit *test) +{ + const struct test_data data = { + .field1 = 0x2, + .field2 = 0x100, + .field3 = 0xF00050, + .field4 = 0x7D3, + .field5 = 0x9, + .field6 = 0x10B, + }; + packed_buf_t expect = { + .buf = { 0x50, 0x0F, 0x00, 0x05, 0x01, 0xF4, 0xD3, 0x0B }, + }; + packed_buf_t buf = {}; + + pack_fields(&buf, sizeof(buf), &data, test_fields, 0); + + KUNIT_EXPECT_MEMEQ(test, &expect, &buf, sizeof(buf)); +} + +static void packing_test_unpack_fields(struct kunit *test) +{ + const packed_buf_t buf = { + .buf = { 0x17, 0x28, 0x10, 0x19, 0x3D, 0xA9, 0x07, 0x9C }, + }; + struct test_data data = {}; + + unpack_fields(&buf, sizeof(buf), &data, test_fields, 0); + + KUNIT_EXPECT_EQ(test, 0, data.field1); + KUNIT_EXPECT_EQ(test, 0x172, data.field2); + KUNIT_EXPECT_EQ(test, 0x810193, data.field3); + KUNIT_EXPECT_EQ(test, 0x36A4, data.field4); + KUNIT_EXPECT_EQ(test, 0x3, data.field5); + KUNIT_EXPECT_EQ(test, 0x19C, data.field6); +} + static struct kunit_case packing_test_cases[] = { KUNIT_CASE_PARAM(packing_test_pack, packing_gen_params), KUNIT_CASE_PARAM(packing_test_unpack, packing_gen_params), + KUNIT_CASE(packing_test_pack_fields), + KUNIT_CASE(packing_test_unpack_fields), {}, }; diff --git a/scripts/gen_packed_field_checks.c b/scripts/gen_packed_field_checks.c new file mode 100644 index 000000000000..09a21afd640b --- /dev/null +++ b/scripts/gen_packed_field_checks.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0 +// Copyright (c) 2024, Intel Corporation +#include +#include + 
+#define MAX_PACKED_FIELD_SIZE 50 + +int main(int argc, char **argv) +{ + for (int i = 1; i <= MAX_PACKED_FIELD_SIZE; i++) { + printf("#define CHECK_PACKED_FIELDS_%d(fields) ({ \\\n", i); + printf("\ttypeof(&(fields)[0]) _f = (fields); \\\n"); + printf("\tBUILD_BUG_ON(ARRAY_SIZE(fields) != %d); \\\n", i); + + for (int j = 0; j < i; j++) + printf("\tCHECK_PACKED_FIELD(_f[%d]); \\\n", j); + + for (int j = 1; j < i; j++) + printf("\tCHECK_PACKED_FIELD_OVERLAP(_f[0].startbit < _f[1].startbit, _f[%d], _f[%d]); \\\n", + j - 1, j); + + printf("})\n\n"); + } + + printf("#define CHECK_PACKED_FIELDS(fields) \\\n"); + + for (int i = 1; i <= MAX_PACKED_FIELD_SIZE; i++) + printf("\t__builtin_choose_expr(ARRAY_SIZE(fields) == %d, CHECK_PACKED_FIELDS_%d(fields), \\\n", + i, i); + + printf("\t({ BUILD_BUG_ON_MSG(1, \"CHECK_PACKED_FIELDS() must be regenerated to support array sizes larger than %d.\"); }) \\\n", + MAX_PACKED_FIELD_SIZE); + + for (int i = 1; i <= MAX_PACKED_FIELD_SIZE; i++) + printf(")"); + + printf("\n"); +} diff --git a/Documentation/core-api/packing.rst b/Documentation/core-api/packing.rst index 821691f23c54..5f729a9d4e87 100644 --- a/Documentation/core-api/packing.rst +++ b/Documentation/core-api/packing.rst @@ -235,3 +235,61 @@ programmer against incorrect API use. The errors are not expected to occur during runtime, therefore it is reasonable for xxx_packing() to return void and simply swallow those errors. Optionally it can dump stack or print the error description. + +The pack_fields() and unpack_fields() macros automatically select the +appropriate function at compile time based on the type of the fields array +passed in. + +Packed Fields +------------- + +Drivers are encouraged to use the ``pack_fields()`` and ``unpack_fields()`` +APIs over using ``pack()``, ``unpack()``, or ``packing()``. + +These APIs use field definitions in arrays of ``struct packed_field_s`` or +``struct packed_field_m`` stored as ``.rodata``. This significantly reduces +the code footprint required to pack or unpack many fields. In addition, +sanity checks on the field definitions are handled at compile time with +``BUILD_BUG_ON`` rather than only when the offending code is executed. + +It is recommended, but not required, that you wrap your packed buffer into a +structured type with a fixed size. This generally makes it easier for the +compiler to enforce that the correct size buffer is used. + +Here is an example of how to use the fields APIs: + +.. 
code-block:: c + + struct data { + u64 field3; + u32 field4; + u16 field1; + u8 field2; + }; + + #define SIZE 13 + + typedef struct __packed { u8 buf[SIZE]; } packed_buf_t; + + static const struct packed_field_s fields[] = { + PACKED_FIELD(100, 90, struct data, field1), + PACKED_FIELD(90, 87, struct data, field2), + PACKED_FIELD(86, 30, struct data, field3), + PACKED_FIELD(29, 0, struct data, field4), + }; + + void unpack_your_data(const packed_buf_t *buf, struct data *unpacked) + { + BUILD_BUG_ON(sizeof(*buf) != SIZE); + + unpack_fields(buf, sizeof(*buf), unpacked, fields, + QUIRK_LITTLE_ENDIAN); + } + + void pack_your_data(const struct data *unpacked, packed_buf_t *buf) + { + BUILD_BUG_ON(sizeof(*buf) != SIZE); + + pack_fields(buf, sizeof(*buf), unpacked, fields, + QUIRK_LITTLE_ENDIAN); + } diff --git a/MAINTAINERS b/MAINTAINERS index 96b9344c3524..45a3c1dca084 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -17448,8 +17448,10 @@ L: netdev@vger.kernel.org S: Supported F: Documentation/core-api/packing.rst F: include/linux/packing.h +F: include/linux/packing_types.h F: lib/packing.c F: lib/packing_test.c +F: scripts/gen_packed_field_checks.c PADATA PARALLEL EXECUTION MECHANISM M: Steffen Klassert diff --git a/scripts/Makefile b/scripts/Makefile index 6bcda4b9d054..546e8175e1c4 100644 --- a/scripts/Makefile +++ b/scripts/Makefile @@ -47,7 +47,7 @@ HOSTCFLAGS_sorttable.o += -DMCOUNT_SORT_ENABLED endif # The following programs are only built on demand -hostprogs += unifdef +hostprogs += unifdef gen_packed_field_checks # The module linker script is preprocessed on demand targets += module.lds From patchwork Tue Nov 19 00:23:41 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jacob Keller X-Patchwork-Id: 13879231 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 265F3BE4F; Tue, 19 Nov 2024 00:23:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975829; cv=none; b=rMDemAP2IHhSdslPUJbaSCQFOitZrP0xmi1/tGBGCwjLl85PhiMP77a4AGXI3lFDmaid39BVwLCe1DvHzwDWf14bDIbjN/2naWcAiw+qLZSzq4RX63i3QzTv9H+uJhD2WkVnUMh0h0bXB1gKL9hXWKTqpGukDHBsbdO6pc/RG9s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975829; c=relaxed/simple; bh=smdCnKK5JCnXzm+UfTq+T7poKVvmdHV4Efr4cxIXJpQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=NIqvrhwRIy6ogvVODHesskXds2TvyTyL/bLGuHQKfjw+xnBUKDPtuqzEsdlb9Du5nF760I1DB1RcyMCkIZ8eqTZhELOrZ8UzkwnAesEpDLaQVbhHscbt3VrpX0/fiigJffqUTtt0nsJ9zYACIBbcuYJQDvUs8wK7xemlS4Hfhe8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=LQ0PHSPk; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="LQ0PHSPk" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; 
i=@intel.com; q=dns/txt; s=Intel; t=1731975828; x=1763511828; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=smdCnKK5JCnXzm+UfTq+T7poKVvmdHV4Efr4cxIXJpQ=; b=LQ0PHSPknddywBZgBab128LHKnJ/aEaUQ6lEbiUgAJ6j7gttcmNshsdv 3gymC2csRvT9EmnPwPLVamLwL1FnQZRpNtIAWeUBm9tgUvspSLdu4R6+D a3+RJxDtdstMSkh6pbH2IRinVZtPHpftiSrtc2UHXGpXvtobiWLuOwU6B N8MlTir0HMbxhn0GLhQr5M0WacpEaLk7iKBX00A6+Dnb8NBbfXKMWeCId aZ5Id+Wx2pXhYqAHpeKqxvZOHlfnUaE9bOO1+YHeX9hwxNxf3IlUDg8xT fLKCiTlAfZIuOAHNMFfDFq5EKX1O1FMO4TcyK6CMmmreJVCpTHvr9ZQUJ A==; X-CSE-ConnectionGUID: z9Puza5hRheX92SYuOH7cw== X-CSE-MsgGUID: MbQgghttQNeyeEYgfplWSg== X-IronPort-AV: E=McAfee;i="6700,10204,11260"; a="31892433" X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="31892433" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 X-CSE-ConnectionGUID: T3hc4Td8RVymWFZ++6YHVg== X-CSE-MsgGUID: U0VpSkoRS6yvWUDOx+Bu0Q== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="89162725" Received: from jekeller-desk.jf.intel.com ([10.166.241.20]) by fmviesa007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 From: Jacob Keller Date: Mon, 18 Nov 2024 16:23:41 -0800 Subject: [PATCH net-next RFC v6 4/9] ice: remove int_q_state from ice_tlan_ctx Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-4-6af8b658a6c3@intel.com> References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> To: Vladimir Oltean , Andrew Morton , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Tony Nguyen , Przemek Kitszel , Masahiro Yamada , netdev Cc: linux-kbuild@vger.kernel.org, Jacob Keller X-Mailer: b4 0.14.1 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC The int_q_state field of the ice_tlan_ctx structure represents the internal queue state. However, we never actually need to assign this or read this during normal operation. In fact, trying to unpack it would not be possible as it is larger than a u64. Remove this field from the ice_tlan_ctx structure, and remove its packing field from the ice_tlan_ctx_info array. Signed-off-by: Jacob Keller Reviewed-by: Przemek Kitszel --- drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 1 - drivers/net/ethernet/intel/ice/ice_common.c | 1 - 2 files changed, 2 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 611577ebc29d..0e8ed8c226e6 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -590,7 +590,6 @@ struct ice_tlan_ctx { u8 drop_ena; u8 cache_prof_idx; u8 pkt_shaper_prof_idx; - u8 int_q_state; /* width not needed - internal - DO NOT WRITE!!! 
*/ }; #endif /* _ICE_LAN_TX_RX_H_ */ diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index b22e71dc59d4..0f5a80269a7b 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1467,7 +1467,6 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = { ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165), ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166), ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168), - ICE_CTX_STORE(ice_tlan_ctx, int_q_state, 122, 171), { 0 } }; From patchwork Tue Nov 19 00:23:42 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jacob Keller X-Patchwork-Id: 13879233 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 78AC21804E; Tue, 19 Nov 2024 00:23:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975831; cv=none; b=CpHTIw0CLgZN4BzhdmwdqPKaLOzGsu++kfReRIprFXpyK3U/KwqZ0HkYZvxEhGas3QqXryh95F5P/YX8eeoq0SOr2hf2bYKsfaz27hECpTHRAfcfiuyjr5/xvSTti9r2RCaVEQIpD81PvJICl+qhiwKklWBeafvN7+I4ceWcyH4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975831; c=relaxed/simple; bh=2fj+cEtnEQNK9lwkaTE8WJmRItkRZwIjxxBAcxZOLyI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=pCxxBuu9CzTlPggTycHFqkq0QodbhBC9JP5aseRJ2/CsxBce1a1PL7yY67ZaQOlUTOcajORWTe86Z4hv2CY/91fWOK6hsCKPcoANLRsJk0tWk7tkGgeN3uTcrvYOzcQ24gjnHZ6vGbwkT4Gr4XT+e8/XAVrEXcagAHUE3yTkbt8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=bOhZ5VU0; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="bOhZ5VU0" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1731975829; x=1763511829; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=2fj+cEtnEQNK9lwkaTE8WJmRItkRZwIjxxBAcxZOLyI=; b=bOhZ5VU03XL9sC/HN7DHRDsyOMkcz2Rw0jRyYz3okAEERHoagQV8BDWw VpXvGvnSRLSAxWqZR5diOoAqyHdciE3Z1lAwDjPUXkgyStDN66ZTmD/sK Pq+ZHt1TZRyaQs4EF+WyC2PdLZ7LuT6+Zt7bro9cKToVizyLwD11aJGrf R25jJO1cBgv+YGlNsDu34DeKkzs4aPyrKdrx62P/wmuKM8oGyXYHjfsPe 4CVTYW6NRPkY9hNCx+WlWiqoy2f8mnkBr9cV7Wrpbddrt3OkzONX5pY7b LYHViVcP0mxgk+6NkqJhbX1tdytTnDNoxv9lsMwhmoJrMWVRynFghev7r g==; X-CSE-ConnectionGUID: Gv9GjeCCTEaAwDxsSD4aFQ== X-CSE-MsgGUID: wWPJMGM2SF6Wg2QaB5o3Vw== X-IronPort-AV: E=McAfee;i="6700,10204,11260"; a="31892450" X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="31892450" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 X-CSE-ConnectionGUID: 78W/gaK2TJSSD8HaNAZD2A== X-CSE-MsgGUID: Hc3x6HniQ6mzws6/7SPgEw== X-ExtLoop1: 1 
X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="89162728" Received: from jekeller-desk.jf.intel.com ([10.166.241.20]) by fmviesa007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 From: Jacob Keller Date: Mon, 18 Nov 2024 16:23:42 -0800 Subject: [PATCH net-next RFC v6 5/9] ice: use structures to keep track of queue context size Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-5-6af8b658a6c3@intel.com> References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> To: Vladimir Oltean , Andrew Morton , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Tony Nguyen , Przemek Kitszel , Masahiro Yamada , netdev Cc: linux-kbuild@vger.kernel.org, Jacob Keller , Vladimir Oltean X-Mailer: b4 0.14.1 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC The ice Tx and Rx queue context are currently stored as arrays of bytes with defined size (ICE_RXQ_CTX_SZ and ICE_TXQ_CTX_SZ). The packed queue context is often passed to other functions as a simple u8 * pointer, which does not allow tracking the size. This makes the queue context API easy to misuse, as you can pass an arbitrary u8 array or pointer. Introduce wrapper typedefs which use a __packed structure that has the proper fixed size for the Tx and Rx context buffers. This enables the compiler to track the size of the value and ensures that passing the wrong buffer size will be detected by the compiler. The existing APIs do not benefit much from this change, however the wrapping structures will be used to simplify the arguments of new packing functions based on the recently introduced pack_fields API. Co-developed-by: Vladimir Oltean Signed-off-by: Vladimir Oltean Signed-off-by: Jacob Keller --- drivers/net/ethernet/intel/ice/ice_adminq_cmd.h | 11 +++++++++-- drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 2 -- drivers/net/ethernet/intel/ice/ice_base.c | 2 +- drivers/net/ethernet/intel/ice/ice_common.c | 24 +++++++++++------------- 4 files changed, 21 insertions(+), 18 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h index 1489a8ceec51..3bf05b135b35 100644 --- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h +++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h @@ -12,6 +12,13 @@ #define ICE_AQC_TOPO_MAX_LEVEL_NUM 0x9 #define ICE_AQ_SET_MAC_FRAME_SIZE_MAX 9728 +#define ICE_RXQ_CTX_SIZE_DWORDS 8 +#define ICE_RXQ_CTX_SZ (ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32)) +#define ICE_TXQ_CTX_SZ 22 + +typedef struct __packed { u8 buf[ICE_RXQ_CTX_SZ]; } ice_rxq_ctx_buf_t; +typedef struct __packed { u8 buf[ICE_TXQ_CTX_SZ]; } ice_txq_ctx_buf_t; + struct ice_aqc_generic { __le32 param0; __le32 param1; @@ -2084,10 +2091,10 @@ struct ice_aqc_add_txqs_perq { __le16 txq_id; u8 rsvd[2]; __le32 q_teid; - u8 txq_ctx[22]; + ice_txq_ctx_buf_t txq_ctx; u8 rsvd2[2]; struct ice_aqc_txsched_elem info; -}; +} __packed; /* The format of the command buffer for Add Tx LAN Queues (0x0C30) * is an array of the following structs. 
Please note that the length of diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 0e8ed8c226e6..a76e5b0e7861 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -371,8 +371,6 @@ enum ice_rx_flex_desc_status_error_1_bits { ICE_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */ }; -#define ICE_RXQ_CTX_SIZE_DWORDS 8 -#define ICE_RXQ_CTX_SZ (ICE_RXQ_CTX_SIZE_DWORDS * sizeof(u32)) #define ICE_TX_CMPLTNQ_CTX_SIZE_DWORDS 22 #define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS 5 #define GLTCLAN_CQ_CNTX(i, CQ) (GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800)) diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 82a9cd4ec7ae..e7aaa0624121 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -910,7 +910,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring, ice_setup_tx_ctx(ring, &tlan_ctx, pf_q); /* copy context contents into the qg_buf */ qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q); - ice_set_ctx(hw, (u8 *)&tlan_ctx, qg_buf->txqs[0].txq_ctx, + ice_set_ctx(hw, (u8 *)&tlan_ctx, (u8 *)&qg_buf->txqs[0].txq_ctx, ice_tlan_ctx_info); /* init queue specific tail reg. It is referred as diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 0f5a80269a7b..48d95cb49864 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -1359,29 +1359,27 @@ int ice_reset(struct ice_hw *hw, enum ice_reset_req req) /** * ice_copy_rxq_ctx_to_hw * @hw: pointer to the hardware structure - * @ice_rxq_ctx: pointer to the rxq context + * @rxq_ctx: pointer to the packed Rx queue context * @rxq_index: the index of the Rx queue * * Copies rxq context from dense structure to HW register space */ -static int -ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, u8 *ice_rxq_ctx, u32 rxq_index) +static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, + const ice_rxq_ctx_buf_t *rxq_ctx, + u32 rxq_index) { u8 i; - if (!ice_rxq_ctx) - return -EINVAL; - if (rxq_index > QRX_CTRL_MAX_INDEX) return -EINVAL; /* Copy each dword separately to HW */ for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) { - wr32(hw, QRX_CONTEXT(i, rxq_index), - *((u32 *)(ice_rxq_ctx + (i * sizeof(u32))))); + u32 ctx = ((const u32 *)rxq_ctx)[i]; - ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, - *((u32 *)(ice_rxq_ctx + (i * sizeof(u32))))); + wr32(hw, QRX_CONTEXT(i, rxq_index), ctx); + + ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, ctx); } return 0; @@ -1426,15 +1424,15 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = { int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, u32 rxq_index) { - u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 }; + ice_rxq_ctx_buf_t buf = {}; if (!rlan_ctx) return -EINVAL; rlan_ctx->prefena = 1; - ice_set_ctx(hw, (u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info); - return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index); + ice_set_ctx(hw, (u8 *)rlan_ctx, (u8 *)&buf, ice_rlan_ctx_info); + return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index); } /* LAN Tx Queue Context */ From patchwork Tue Nov 19 00:23:43 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jacob Keller X-Patchwork-Id: 13879235 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C763A1A29A; Tue, 19 Nov 2024 00:23:49 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975832; cv=none; b=b5PtYrRLv7HAvHQWdGoSj6foDDq65XfIwvrFaJWXEIwxXxajsbFGLHV71LfMBl/8u0zaNXvkpWahe9xV/MkEZfp4L+xwa0AX8AGXZSco1EfrAdwIQmqxZIZclpDpBeZth8clP32xKUoqZnzd5QidpFzulKbzGo9jP5UVJvTPJqQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975832; c=relaxed/simple; bh=ULm9WTzsAFg/+AhJ0Q+uhk9Hj2fFgYChBy7/23LS9TI=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=jwqjnW2sPnBpGwOStrvuCO7lJKQYx5D0Qj/wdVDWoO007fS+WO3OFlHrEjjwB6/to+93V4asd83vRQLOT7RjmjMvVFZIuCoT0+pQbiEoC1m2OPMve0wqjKfGUzCWB90NzcMiPPOG9ZKaiu+vSuNf/lZVKqi2iTKYCPAXE1YvFGc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=XrIMBgai; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="XrIMBgai" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1731975830; x=1763511830; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=ULm9WTzsAFg/+AhJ0Q+uhk9Hj2fFgYChBy7/23LS9TI=; b=XrIMBgaiKr0Zd5PN/zTT8PpdvGhENCBuErpWljbnwc4CZ4wkCk5YklDh 76tnNLgBeJZZJur2bJkIu+w2SISNhIuZTi+Kck6xbn6FOxiOHtum9NddC 32dlVfbc1fIn24j8FF0pmTDVxRD3MFpQ9guSR1/HXe3lXQ0Z1qaYRFfF0 OpNv3oQy0nKzHX2Xk6Ezd4CMb2MprzwatlkL+Uol6dPghJaxFMPR3YTGD H7fyo69wKQlgJzS9qsou+hh2pY+83Ocww2DhB1uiM+PtQfFGyRbwuQbrR Tvc4UIiNZ1bZEJLphvyuAVmRyJf5nomqoVzZ8BITjb6XZeMFN1YZ1Zj75 w==; X-CSE-ConnectionGUID: noLy51yYQ/KjnGEoZ2MoRA== X-CSE-MsgGUID: 4jASa/f1TJW8KwjLEUH5gQ== X-IronPort-AV: E=McAfee;i="6700,10204,11260"; a="31892455" X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="31892455" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 X-CSE-ConnectionGUID: OGEzAmuLR124xr/ycoaHlA== X-CSE-MsgGUID: tBlqmZRsTx6Ukxg7/GyXNg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="89162731" Received: from jekeller-desk.jf.intel.com ([10.166.241.20]) by fmviesa007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 From: Jacob Keller Date: Mon, 18 Nov 2024 16:23:43 -0800 Subject: [PATCH net-next RFC v6 6/9] ice: use for Tx and Rx queue context data Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-6-6af8b658a6c3@intel.com> References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> To: Vladimir Oltean , Andrew Morton , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Tony Nguyen , Przemek Kitszel , Masahiro 
Yamada , netdev Cc: linux-kbuild@vger.kernel.org, Jacob Keller X-Mailer: b4 0.14.1 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC The ice driver needs to write the Tx and Rx queue context when programming Tx and Rx queues. This is currently done using bespoke logic in the ice_set_ctx() function and its helpers, along with bit position definitions in the ice_tlan_ctx_info and ice_rlan_ctx_info structures. This logic does work, but is problematic for several reasons: 1) ice_set_ctx() requires a helper function for each byte size being packed, as it uses a separate function to pack u8, u16, u32, and u64 fields. This requires 4 functions which contain near-duplicate logic with the types changed out. 2) The logic in ice_pack_ctx_word(), ice_pack_ctx_dword(), and ice_pack_ctx_qword() does not handle values which straddle alignment boundaries very well. This requires that several fields in the ice_tlan_ctx_info and ice_rlan_ctx_info be a size larger than their bit size should require. 3) Future support for live migration will require adding unpacking functions to take the packed hardware context and unpack it into the ice_rlan_ctx and ice_tlan_ctx structures. Implementing this would require implementing ice_get_ctx() and its associated helper functions, which essentially doubles the amount of code required. The Linux kernel has had a packing library that can handle this logic since commit 554aae35007e ("lib: Add support for generic packing operations"). The library was recently extended with support for packing or unpacking an array of fields, with a structure similar to the ice_ctx_ele structure. Replace the ice-specific ice_set_ctx() logic with the recently added pack_fields and packed_field_s infrastructure from <linux/packing.h>. For API simplicity, the Tx and Rx queue context are programmed using separate ice_pack_txq_ctx() and ice_pack_rxq_ctx() functions. This avoids needing to export the packed_field_s arrays. The functions take pointers to the appropriate ice_txq_ctx_buf_t and ice_rxq_ctx_buf_t types, ensuring that only buffers of the appropriate size are passed.
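For illustration, the resulting call pattern looks roughly like the condensed sketch below. The struct, field positions and function names here are made-up placeholders (the real field tables and pack helpers are in the diff that follows), and the sketch assumes the pack_fields() API introduced earlier in this series:

	/* Illustrative only: a two-field layout, not the actual ice queue context */
	struct example_ctx {
		u16 head;
		u8 cpuid;
	};

	static const struct packed_field_s example_ctx_fields[] = {
		/*           startbit, endbit, struct, field */
		PACKED_FIELD(12, 0, struct example_ctx, head),
		PACKED_FIELD(20, 13, struct example_ctx, cpuid),
	};

	static void example_pack_rxq_ctx(const struct example_ctx *ctx,
					 ice_rxq_ctx_buf_t *buf)
	{
		/* One call packs every field in the table; the field
		 * definitions are sanity checked at compile time with
		 * BUILD_BUG_ON rather than at runtime.
		 */
		pack_fields(buf, sizeof(*buf), ctx, example_ctx_fields,
			    QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST);
	}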
Signed-off-by: Jacob Keller --- drivers/net/ethernet/intel/ice/ice_common.h | 5 +- drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 14 -- drivers/net/ethernet/intel/ice/ice_base.c | 3 +- drivers/net/ethernet/intel/ice/ice_common.c | 243 ++++--------------------- drivers/net/ethernet/intel/Kconfig | 1 + 5 files changed, 42 insertions(+), 224 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h index 27208a60cece..a68bea3934e3 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.h +++ b/drivers/net/ethernet/intel/ice/ice_common.h @@ -92,9 +92,8 @@ ice_aq_set_rss_key(struct ice_hw *hw, u16 vsi_handle, bool ice_check_sq_alive(struct ice_hw *hw, struct ice_ctl_q_info *cq); int ice_aq_q_shutdown(struct ice_hw *hw, bool unloading); void ice_fill_dflt_direct_cmd_desc(struct ice_aq_desc *desc, u16 opcode); -extern const struct ice_ctx_ele ice_tlan_ctx_info[]; -int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info); + +void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf); extern struct mutex ice_global_cfg_lock_sw; diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index a76e5b0e7861..31d4a445d640 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -408,20 +408,6 @@ struct ice_rlan_ctx { u8 prefena; /* NOTE: normally must be set to 1 at init */ }; -struct ice_ctx_ele { - u16 offset; - u16 size_of; - u16 width; - u16 lsb; -}; - -#define ICE_CTX_STORE(_struct, _ele, _width, _lsb) { \ - .offset = offsetof(struct _struct, _ele), \ - .size_of = sizeof_field(struct _struct, _ele), \ - .width = _width, \ - .lsb = _lsb, \ -} - /* for hsplit_0 field of Rx RLAN context */ enum ice_rlan_ctx_rx_hsplit_0 { ICE_RLAN_RX_HSPLIT_0_NO_SPLIT = 0, diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index e7aaa0624121..5fe7b5a10020 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -910,8 +910,7 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_tx_ring *ring, ice_setup_tx_ctx(ring, &tlan_ctx, pf_q); /* copy context contents into the qg_buf */ qg_buf->txqs[0].txq_id = cpu_to_le16(pf_q); - ice_set_ctx(hw, (u8 *)&tlan_ctx, (u8 *)&qg_buf->txqs[0].txq_ctx, - ice_tlan_ctx_info); + ice_pack_txq_ctx(&tlan_ctx, &qg_buf->txqs[0].txq_ctx); /* init queue specific tail reg. It is referred as * transmit comm scheduler queue doorbell. 
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 48d95cb49864..1b013c9c9378 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -6,6 +6,7 @@ #include "ice_adminq_cmd.h" #include "ice_flow.h" #include "ice_ptp_hw.h" +#include #define ICE_PF_RESET_WAIT_COUNT 300 #define ICE_MAX_NETLIST_SIZE 10 @@ -1385,9 +1386,12 @@ static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw, return 0; } +#define ICE_CTX_STORE(struct_name, struct_field, width, lsb) \ + PACKED_FIELD((lsb) + (width) - 1, (lsb), struct struct_name, struct_field) + /* LAN Rx Queue Context */ -static const struct ice_ctx_ele ice_rlan_ctx_info[] = { - /* Field Width LSB */ +static const struct packed_field_s ice_rlan_ctx_fields[] = { + /* Field Width LSB */ ICE_CTX_STORE(ice_rlan_ctx, head, 13, 0), ICE_CTX_STORE(ice_rlan_ctx, cpuid, 8, 13), ICE_CTX_STORE(ice_rlan_ctx, base, 57, 32), @@ -1408,9 +1412,23 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = { ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena, 1, 196), ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh, 3, 198), ICE_CTX_STORE(ice_rlan_ctx, prefena, 1, 201), - { 0 } }; +/** + * ice_pack_rxq_ctx - Pack Rx queue context into a HW buffer + * @ctx: the Rx queue context to pack + * @buf: the HW buffer to pack into + * + * Pack the Rx queue context from the CPU-friendly unpacked buffer into its + * bit-packed HW layout. + */ +static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx, + ice_rxq_ctx_buf_t *buf) +{ + pack_fields(buf, sizeof(*buf), ctx, ice_rlan_ctx_fields, + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST); +} + /** * ice_write_rxq_ctx * @hw: pointer to the hardware structure @@ -1431,12 +1449,13 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx, rlan_ctx->prefena = 1; - ice_set_ctx(hw, (u8 *)rlan_ctx, (u8 *)&buf, ice_rlan_ctx_info); + ice_pack_rxq_ctx(rlan_ctx, &buf); + return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index); } /* LAN Tx Queue Context */ -const struct ice_ctx_ele ice_tlan_ctx_info[] = { +static const struct packed_field_s ice_tlan_ctx_fields[] = { /* Field Width LSB */ ICE_CTX_STORE(ice_tlan_ctx, base, 57, 0), ICE_CTX_STORE(ice_tlan_ctx, port_num, 3, 57), @@ -1465,9 +1484,22 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = { ICE_CTX_STORE(ice_tlan_ctx, drop_ena, 1, 165), ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx, 2, 166), ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx, 3, 168), - { 0 } }; +/** + * ice_pack_txq_ctx - Pack Tx queue context into a HW buffer + * @ctx: the Tx queue context to pack + * @buf: the HW buffer to pack into + * + * Pack the Tx queue context from the CPU-friendly unpacked buffer into its + * bit-packed HW layout. 
+ */ +void ice_pack_txq_ctx(const struct ice_tlan_ctx *ctx, ice_txq_ctx_buf_t *buf) +{ + pack_fields(buf, sizeof(*buf), ctx, ice_tlan_ctx_fields, + QUIRK_LITTLE_ENDIAN | QUIRK_LSW32_IS_FIRST); +} + /* Sideband Queue command wrappers */ /** @@ -4545,205 +4577,6 @@ ice_aq_add_rdma_qsets(struct ice_hw *hw, u8 num_qset_grps, /* End of FW Admin Queue command wrappers */ -/** - * ice_pack_ctx_byte - write a byte to a packed context structure - * @src_ctx: unpacked source context structure - * @dest_ctx: packed destination context data - * @ce_info: context element description - */ -static void ice_pack_ctx_byte(u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info) -{ - u8 src_byte, dest_byte, mask; - u8 *from, *dest; - u16 shift_width; - - /* copy from the next struct field */ - from = src_ctx + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width); - - src_byte = *from; - src_byte <<= shift_width; - src_byte &= mask; - - /* get the current bits from the target bit string */ - dest = dest_ctx + (ce_info->lsb / 8); - - memcpy(&dest_byte, dest, sizeof(dest_byte)); - - dest_byte &= ~mask; /* get the bits not changing */ - dest_byte |= src_byte; /* add in the new bits */ - - /* put it all back */ - memcpy(dest, &dest_byte, sizeof(dest_byte)); -} - -/** - * ice_pack_ctx_word - write a word to a packed context structure - * @src_ctx: unpacked source context structure - * @dest_ctx: packed destination context data - * @ce_info: context element description - */ -static void ice_pack_ctx_word(u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info) -{ - u16 src_word, mask; - __le16 dest_word; - u8 *from, *dest; - u16 shift_width; - - /* copy from the next struct field */ - from = src_ctx + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width); - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_word = *(u16 *)from; - src_word <<= shift_width; - src_word &= mask; - - /* get the current bits from the target bit string */ - dest = dest_ctx + (ce_info->lsb / 8); - - memcpy(&dest_word, dest, sizeof(dest_word)); - - dest_word &= ~(cpu_to_le16(mask)); /* get the bits not changing */ - dest_word |= cpu_to_le16(src_word); /* add in the new bits */ - - /* put it all back */ - memcpy(dest, &dest_word, sizeof(dest_word)); -} - -/** - * ice_pack_ctx_dword - write a dword to a packed context structure - * @src_ctx: unpacked source context structure - * @dest_ctx: packed destination context data - * @ce_info: context element description - */ -static void ice_pack_ctx_dword(u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info) -{ - u32 src_dword, mask; - __le32 dest_dword; - u8 *from, *dest; - u16 shift_width; - - /* copy from the next struct field */ - from = src_ctx + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = GENMASK(ce_info->width - 1 + shift_width, shift_width); - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_dword = *(u32 *)from; - src_dword <<= shift_width; - src_dword &= mask; - - /* get the current bits from the target bit string */ - dest = dest_ctx + (ce_info->lsb / 8); - - memcpy(&dest_dword, dest, sizeof(dest_dword)); - - dest_dword &= 
~(cpu_to_le32(mask)); /* get the bits not changing */ - dest_dword |= cpu_to_le32(src_dword); /* add in the new bits */ - - /* put it all back */ - memcpy(dest, &dest_dword, sizeof(dest_dword)); -} - -/** - * ice_pack_ctx_qword - write a qword to a packed context structure - * @src_ctx: unpacked source context structure - * @dest_ctx: packed destination context data - * @ce_info: context element description - */ -static void ice_pack_ctx_qword(u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info) -{ - u64 src_qword, mask; - __le64 dest_qword; - u8 *from, *dest; - u16 shift_width; - - /* copy from the next struct field */ - from = src_ctx + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = GENMASK_ULL(ce_info->width - 1 + shift_width, shift_width); - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_qword = *(u64 *)from; - src_qword <<= shift_width; - src_qword &= mask; - - /* get the current bits from the target bit string */ - dest = dest_ctx + (ce_info->lsb / 8); - - memcpy(&dest_qword, dest, sizeof(dest_qword)); - - dest_qword &= ~(cpu_to_le64(mask)); /* get the bits not changing */ - dest_qword |= cpu_to_le64(src_qword); /* add in the new bits */ - - /* put it all back */ - memcpy(dest, &dest_qword, sizeof(dest_qword)); -} - -/** - * ice_set_ctx - set context bits in packed structure - * @hw: pointer to the hardware structure - * @src_ctx: pointer to a generic non-packed context structure - * @dest_ctx: pointer to memory for the packed structure - * @ce_info: List of Rx context elements - */ -int ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx, - const struct ice_ctx_ele *ce_info) -{ - int f; - - for (f = 0; ce_info[f].width; f++) { - /* We have to deal with each element of the FW response - * using the correct size so that we are correct regardless - * of the endianness of the machine. - */ - if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) { - ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... 
skipping write\n", - f, ce_info[f].width, ce_info[f].size_of); - continue; - } - switch (ce_info[f].size_of) { - case sizeof(u8): - ice_pack_ctx_byte(src_ctx, dest_ctx, &ce_info[f]); - break; - case sizeof(u16): - ice_pack_ctx_word(src_ctx, dest_ctx, &ce_info[f]); - break; - case sizeof(u32): - ice_pack_ctx_dword(src_ctx, dest_ctx, &ce_info[f]); - break; - case sizeof(u64): - ice_pack_ctx_qword(src_ctx, dest_ctx, &ce_info[f]); - break; - default: - return -EINVAL; - } - } - - return 0; -} - /** * ice_get_lan_q_ctx - get the LAN queue context for the given VSI and TC * @hw: pointer to the HW struct diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 20bc40eec487..24ec9a4f1ffa 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -292,6 +292,7 @@ config ICE select DIMLIB select LIBIE select NET_DEVLINK + select PACKING select PLDMFW select DPLL help From patchwork Tue Nov 19 00:23:44 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jacob Keller X-Patchwork-Id: 13879234 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3091E1D554; Tue, 19 Nov 2024 00:23:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975831; cv=none; b=iNb7FmOsj4iRUJRTGqudZSoz3poP6+iba5Vh6tqCIGUlpA4Js6P8vDVVSE4a0pRVeRJ6ZO6UG/hUVQqiQTynlfnjl2/ptRmm8D+KHFDjS2SuIRsePvWlPdBBaNSOLZ/W6tG5sqjbAmy3BGm5NsirbXmA6dZ4y5XkTvBICNHOB0w= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975831; c=relaxed/simple; bh=IkQQOUz/aBPkDtC9Ll8oIZ+4rI/xVK2GKUDT9P+gP2Y=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=SNstlCwC4E0GVzJIzIZKSjmtDFbsbsBHiiK1hyDRJ50b7wKeYPtI71P0fE8hphcG+VEneXa//OPzljy7XsLWxcDs7XsiI9NMhOBWlQSH8A0QNQOUSwt8n1Hp4mzUs21d1bsSKS+AMCceTpNtGtjcvznBtuVVN3FzC7QPZDUJ9/Y= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com; spf=pass smtp.mailfrom=intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=LKC3fzWA; arc=none smtp.client-ip=198.175.65.21 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="LKC3fzWA" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1731975830; x=1763511830; h=from:date:subject:mime-version:content-transfer-encoding: message-id:references:in-reply-to:to:cc; bh=IkQQOUz/aBPkDtC9Ll8oIZ+4rI/xVK2GKUDT9P+gP2Y=; b=LKC3fzWAX/rN3qFmgdDAV6xzKqSHNKUR4V2TQW7rMwpF0P8dDdtlwkim X3lAK+GgRofdCeD7UKyS7dcvzXpOpVDcFTNiooLoeqquXTmlC4idxs7Cf E1pd7SR0ihD5M/XTuwI85g/e26OckfjhAyekjTZ8xKww6E3NfQUKPJwM9 c4Kc+T0K+RhV8jmWlTq7fa8/UpRiyg/aNW8KJpscJlztslQIeNk4kW9EJ ooaLyfL5PYGQDZyFagu3zpRKxVufCDv3DQXOXEOyzV/a2xASwBmSbcX1L s7oIDKTL+C9uZBcgT5+RtoF26OEZ/qQKARGM9Tvm9R79a96tmtG33SUYn g==; X-CSE-ConnectionGUID: NJoVkmTDR4CiBn+A1oSLDw== X-CSE-MsgGUID: VdiRZRxDR4Kmktu5uP1XeA== X-IronPort-AV: 
E=McAfee;i="6700,10204,11260"; a="31892459" X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="31892459" Received: from fmviesa007.fm.intel.com ([10.60.135.147]) by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 X-CSE-ConnectionGUID: 8NejfBFJSlCu2tsmxv0zHA== X-CSE-MsgGUID: 5zUhhrYYSiGPGoeORTBfSQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.12,165,1728975600"; d="scan'208";a="89162734" Received: from jekeller-desk.jf.intel.com ([10.166.241.20]) by fmviesa007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Nov 2024 16:23:46 -0800 From: Jacob Keller Date: Mon, 18 Nov 2024 16:23:44 -0800 Subject: [PATCH net-next RFC v6 7/9] ice: reduce size of queue context fields Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-7-6af8b658a6c3@intel.com> References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com> To: Vladimir Oltean , Andrew Morton , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Tony Nguyen , Przemek Kitszel , Masahiro Yamada , netdev Cc: linux-kbuild@vger.kernel.org, Jacob Keller X-Mailer: b4 0.14.1 X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC The ice_rlan_ctx and ice_tlan_ctx structures have some fields which are intentionally sized larger than necessary relative to the packed sizes the data must fit into. This was done because the original ice_set_ctx() function and its helpers did not correctly handle packing when the packed bits straddled a byte. This is no longer the case with the use of the <linux/packing.h> implementation. Save some bytes in these structures by sizing the variables to the number of bytes the actual bitpacked fields fit into. There are a couple of gaps left in the structure, which is a result of the fields being in the order they appear in the packed bit layout, but where alignment forces some extra gaps. We could fix this, saving ~8 bytes from each structure. However, these structures are not used heavily, and the resulting savings is minimal: $ bloat-o-meter ice-before-reorder.ko ice-after-reorder.ko add/remove: 0/0 grow/shrink: 1/1 up/down: 26/-70 (-44) Function old new delta ice_vsi_cfg_txq 1873 1899 +26 ice_setup_rx_ctx.constprop 1529 1459 -70 Total: Before=1459555, After=1459511, chg -0.00% Thus, the fields are left in the same order as the packed bit layout, despite the gaps this causes. Signed-off-by: Jacob Keller Reviewed-by: Przemek Kitszel --- drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 32 ++++++++------------------ 1 file changed, 10 insertions(+), 22 deletions(-) diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h index 31d4a445d640..1479b45738af 100644 --- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h +++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h @@ -375,23 +375,17 @@ enum ice_rx_flex_desc_status_error_1_bits { #define ICE_TX_DRBELL_Q_CTX_SIZE_DWORDS 5 #define GLTCLAN_CQ_CNTX(i, CQ) (GLTCLAN_CQ_CNTX0(CQ) + ((i) * 0x0800)) -/* RLAN Rx queue context data - * - * The sizes of the variables may be larger than needed due to crossing byte - * boundaries. 
If we do not have the width of the variable set to the correct - * size then we could end up shifting bits off the top of the variable when the - * variable is at the top of a byte and crosses over into the next byte. - */ +/* RLAN Rx queue context data */ struct ice_rlan_ctx { u16 head; - u16 cpuid; /* bigger than needed, see above for reason */ + u8 cpuid; #define ICE_RLAN_BASE_S 7 u64 base; u16 qlen; #define ICE_RLAN_CTX_DBUF_S 7 - u16 dbuf; /* bigger than needed, see above for reason */ + u8 dbuf; #define ICE_RLAN_CTX_HBUF_S 6 - u16 hbuf; /* bigger than needed, see above for reason */ + u8 hbuf; u8 dtype; u8 dsize; u8 crcstrip; @@ -399,12 +393,12 @@ struct ice_rlan_ctx { u8 hsplit_0; u8 hsplit_1; u8 showiv; - u32 rxmax; /* bigger than needed, see above for reason */ + u16 rxmax; u8 tphrdesc_ena; u8 tphwdesc_ena; u8 tphdata_ena; u8 tphhead_ena; - u16 lrxqthresh; /* bigger than needed, see above for reason */ + u8 lrxqthresh; u8 prefena; /* NOTE: normally must be set to 1 at init */ }; @@ -535,18 +529,12 @@ enum ice_tx_ctx_desc_eipt_offload { #define ICE_LAN_TXQ_MAX_QGRPS 127 #define ICE_LAN_TXQ_MAX_QDIS 1023 -/* Tx queue context data - * - * The sizes of the variables may be larger than needed due to crossing byte - * boundaries. If we do not have the width of the variable set to the correct - * size then we could end up shifting bits off the top of the variable when the - * variable is at the top of a byte and crosses over into the next byte. - */ +/* Tx queue context data */ struct ice_tlan_ctx { #define ICE_TLAN_CTX_BASE_S 7 u64 base; /* base is defined in 128-byte units */ u8 port_num; - u16 cgd_num; /* bigger than needed, see above for reason */ + u8 cgd_num; u8 pf_num; u16 vmvf_num; u8 vmvf_type; @@ -557,7 +545,7 @@ struct ice_tlan_ctx { u8 tsyn_ena; u8 internal_usage_flag; u8 alt_vlan; - u16 cpuid; /* bigger than needed, see above for reason */ + u8 cpuid; u8 wb_mode; u8 tphrd_desc; u8 tphrd; @@ -566,7 +554,7 @@ struct ice_tlan_ctx { u16 qnum_in_func; u8 itr_notification_mode; u8 adjust_prof_id; - u32 qlen; /* bigger than needed, see above for reason */ + u16 qlen; u8 quanta_prof_idx; u8 tso_ena; u16 tso_qnum; From patchwork Tue Nov 19 00:23:45 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jacob Keller X-Patchwork-Id: 13879236 X-Patchwork-Delegate: kuba@kernel.org Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 084E938FA3; Tue, 19 Nov 2024 00:23:52 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.21 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975833; cv=none; b=GNdxY5ad3RVtZfrtJ2dtmaK7Mdrm2TbxMLaPCeTb2NOMwtQhc6/JJgajCiUFMrDCNkDCKEdMAHg9C+ZuZ1drVoV69Mf5qr8sUsvWq3dgLri52fIwzxSPXYCmzQNatUFWSpfD7G3cKyheejv/reqxiuOAf9DolLg823p8ne1Qldw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1731975833; c=relaxed/simple; bh=qfh2z/3jWRjbzt9P+/s3k9NIdplcKnErwcLpc6EWwvo=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=dvOZaSchZ58RX7JONBaFkRnfnTW9Sf/rpxgCMe7DOKI2rRMsOVjt+keLi+lU9r2sQVF9xlpKjNHdzaH/7aKj6r7Qo7artmpOO+GEzE4xWZahZS/6bbgBin209lf3Nw8uyPlL0/FDWV4vGkjlOnrGOTv/8tyfkmQr3Ufe5B7Jceo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) 
From patchwork Tue Nov 19 00:23:45 2024
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 13879236
From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:45 -0800
Subject: [PATCH net-next RFC v6 8/9] ice: move prefetch enable to ice_setup_rx_ctx
Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-8-6af8b658a6c3@intel.com>
References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada, netdev
Cc: linux-kbuild@vger.kernel.org, Jacob Keller

The ice_write_rxq_ctx() function is responsible for programming the Rx
Queue context into hardware. It receives the configuration in unpacked
form via the ice_rlan_ctx structure.

This function unconditionally modifies the context to set the prefetch
enable bit. This was done by commit c31a5c25bb19 ("ice: Always set prefena
when configuring an Rx queue"). Setting this bit makes sense, since
prefetching descriptors is almost always the preferred behavior.

However, the ice_write_rxq_ctx() function is not the place that actually
defines the queue context. We initialize the Rx Queue context in
ice_setup_rx_ctx(). It is surprising to have the Rx queue context changed
by a function whose responsibility is to program the given context to
hardware.

Following the principle of least surprise, move the setting of the
prefetch enable bit out of ice_write_rxq_ctx() and into
ice_setup_rx_ctx().

Signed-off-by: Jacob Keller
Reviewed-by: Przemek Kitszel
---
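In condensed form, the change below separates policy from mechanism. A
rough sketch with hypothetical names (the real code is in ice_base.c and
ice_common.c in the diff that follows):

	struct rlan_ctx_sketch {
		int prefena;
		/* ... remaining context fields ... */
	};

	/* Before: the write helper silently changed the caller's context. */
	static int write_rxq_ctx_old(struct rlan_ctx_sketch *ctx)
	{
		ctx->prefena = 1;	/* surprising side effect */
		return 0;		/* stand-in for the register writes */
	}

	/* After: the function that defines the context owns the policy ... */
	static void setup_rx_ctx(struct rlan_ctx_sketch *ctx)
	{
		ctx->prefena = 1;	/* enable descriptor prefetch */
	}

	/* ... and the write helper only programs what it is given. */
	static int write_rxq_ctx_new(const struct rlan_ctx_sketch *ctx)
	{
		(void)ctx;
		return 0;		/* stand-in for the register writes */
	}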
 drivers/net/ethernet/intel/ice/ice_base.c   | 3 +++
 drivers/net/ethernet/intel/ice/ice_common.c | 9 +++------
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index 5fe7b5a10020..b2af8e3586f7 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -454,6 +454,9 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring)
 	/* Rx queue threshold in units of 64 */
 	rlan_ctx.lrxqthresh = 1;
 
+	/* Enable descriptor prefetch */
+	rlan_ctx.prefena = 1;
+
 	/* PF acts as uplink for switchdev; set flex descriptor with src_vsi
 	 * metadata and flags to allow redirecting to PR netdev
 	 */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 1b013c9c9378..379040593d97 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1430,14 +1430,13 @@ static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
 }
 
 /**
- * ice_write_rxq_ctx
+ * ice_write_rxq_ctx - Write Rx Queue context to hardware
  * @hw: pointer to the hardware structure
  * @rlan_ctx: pointer to the rxq context
  * @rxq_index: the index of the Rx queue
  *
- * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space and enables the hardware to prefetch descriptors
- * instead of only fetching them on demand
+ * Pack the sparse Rx Queue context into dense hardware format and write it
+ * into the HW register space.
  */
 int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 		      u32 rxq_index)
@@ -1447,8 +1446,6 @@ int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 	u32 rxq_index)
 {
 	ice_rxq_ctx_buf_t buf = {};
 
 	if (!rlan_ctx)
 		return -EINVAL;
 
-	rlan_ctx->prefena = 1;
-
 	ice_pack_rxq_ctx(rlan_ctx, &buf);
 
 	return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
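Both this patch and the next rework function comments into kernel-doc form.
For reference, the general shape kernel-doc expects looks roughly like this
(a generic sketch, not driver code):

	/**
	 * my_helper - One-line summary of what the function does
	 * @hw: description of the first parameter
	 * @len: description of the second parameter
	 *
	 * Optional longer description written as normal sentences.
	 *
	 * Return: 0 on success, or a negative error code on failure.
	 */
	static int my_helper(void *hw, unsigned long len);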
From patchwork Tue Nov 19 00:23:46 2024
X-Patchwork-Submitter: Jacob Keller
X-Patchwork-Id: 13879237
From: Jacob Keller
Date: Mon, 18 Nov 2024 16:23:46 -0800
Subject: [PATCH net-next RFC v6 9/9] ice: cleanup Rx queue context programming functions
Message-Id: <20241118-packing-pack-fields-and-ice-implementation-v6-9-6af8b658a6c3@intel.com>
References: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
In-Reply-To: <20241118-packing-pack-fields-and-ice-implementation-v6-0-6af8b658a6c3@intel.com>
To: Vladimir Oltean, Andrew Morton, Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Tony Nguyen, Przemek Kitszel, Masahiro Yamada, netdev
Cc: linux-kbuild@vger.kernel.org, Jacob Keller

The ice_copy_rxq_ctx_to_hw() and ice_write_rxq_ctx() functions perform some
defensive checks which are typically frowned upon by kernel style
guidelines.

In particular, NULL checks on buffers which point to the stack are
discouraged, especially when the functions are static and only called once.
Checks of this sort only serve to hide potential programming errors, as we
will not produce the normal crash dump on a NULL access. In addition,
ice_copy_rxq_ctx_to_hw() cannot fail in any other way, so it can be made
void.

Future support for VF Live Migration will need to introduce an inverse
function for reading the Rx queue context from HW registers and unpacking
it, as well as functions to pack and unpack the Tx queue context from HW.
Rather than copying these style issues into the new functions, let's first
clean up the existing code.

For the ice_copy_rxq_ctx_to_hw() function:

* Move the Rx queue index check out of this function.
* Convert the function to a void return.
* Use a simple int variable instead of a u8 for the for loop index, and
  initialize it inside the for loop.
* Update the function description to better align with kernel-doc style.

For the ice_write_rxq_ctx() function:

* Move the Rx queue index check into this function.
* Update the function description with a Return: section to align with
  kernel-doc style.

These changes align the existing write functions with current kernel style,
and match the style of the new functions that will be added when live
migration is implemented in a future series.

Signed-off-by: Jacob Keller
---
 drivers/net/ethernet/intel/ice/ice_common.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 379040593d97..6c6862beab6a 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1358,32 +1358,23 @@ int ice_reset(struct ice_hw *hw, enum ice_reset_req req)
 }
 
 /**
- * ice_copy_rxq_ctx_to_hw
+ * ice_copy_rxq_ctx_to_hw - Copy packed Rx queue context to HW registers
  * @hw: pointer to the hardware structure
  * @rxq_ctx: pointer to the packed Rx queue context
  * @rxq_index: the index of the Rx queue
- *
- * Copies rxq context from dense structure to HW register space
  */
-static int ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
-				  const ice_rxq_ctx_buf_t *rxq_ctx,
-				  u32 rxq_index)
+static void ice_copy_rxq_ctx_to_hw(struct ice_hw *hw,
+				   const ice_rxq_ctx_buf_t *rxq_ctx,
+				   u32 rxq_index)
 {
-	u8 i;
-
-	if (rxq_index > QRX_CTRL_MAX_INDEX)
-		return -EINVAL;
-
 	/* Copy each dword separately to HW */
-	for (i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
+	for (int i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
 		u32 ctx = ((const u32 *)rxq_ctx)[i];
 
 		wr32(hw, QRX_CONTEXT(i, rxq_index), ctx);
 
 		ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, ctx);
 	}
-
-	return 0;
 }
 
 #define ICE_CTX_STORE(struct_name, struct_field, width, lsb) \
@@ -1432,23 +1423,26 @@ static void ice_pack_rxq_ctx(const struct ice_rlan_ctx *ctx,
 /**
  * ice_write_rxq_ctx - Write Rx Queue context to hardware
  * @hw: pointer to the hardware structure
- * @rlan_ctx: pointer to the rxq context
+ * @rlan_ctx: pointer to the unpacked Rx queue context
  * @rxq_index: the index of the Rx queue
  *
  * Pack the sparse Rx Queue context into dense hardware format and write it
  * into the HW register space.
+ *
+ * Return: 0 on success, or -EINVAL if the Rx queue index is invalid.
  */
 int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 		      u32 rxq_index)
 {
 	ice_rxq_ctx_buf_t buf = {};
 
-	if (!rlan_ctx)
+	if (rxq_index > QRX_CTRL_MAX_INDEX)
 		return -EINVAL;
 
 	ice_pack_rxq_ctx(rlan_ctx, &buf);
+	ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
 
-	return ice_copy_rxq_ctx_to_hw(hw, &buf, rxq_index);
+	return 0;
 }
 
 /* LAN Tx Queue Context */
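The message above anticipates an inverse helper for live migration that
reads the packed context back out of the registers. Purely as a sketch of
what such a function might look like, mirroring ice_copy_rxq_ctx_to_hw()
above (hypothetical name and body, not part of this series):

	/* Hypothetical sketch only; mirrors ice_copy_rxq_ctx_to_hw() above. */
	static void ice_copy_rxq_ctx_from_hw(struct ice_hw *hw,
					     ice_rxq_ctx_buf_t *rxq_ctx,
					     u32 rxq_index)
	{
		/* Read each dword of the packed context back from HW */
		for (int i = 0; i < ICE_RXQ_CTX_SIZE_DWORDS; i++) {
			u32 ctx = rd32(hw, QRX_CONTEXT(i, rxq_index));

			((u32 *)rxq_ctx)[i] = ctx;

			ice_debug(hw, ICE_DBG_QCTX, "qrxdata[%d]: %08X\n", i, ctx);
		}
	}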