From patchwork Wed Dec 6 20:59:19 2023
X-Patchwork-Submitter: Larysa Zaremba
X-Patchwork-Id: 13482260
X-Patchwork-Delegate: bpf@iogearbox.net
From: Larysa Zaremba
To: bpf@vger.kernel.org
Cc: Larysa Zaremba, netdev@vger.kernel.org, Alexei Starovoitov,
 Daniel Borkmann, "David S. Miller", Jakub Kicinski,
 Jesper Dangaard Brouer, Eric Dumazet, Magnus Karlsson,
 Willem de Bruijn, Yunsheng Lin, Maciej Fijalkowski,
 John Fastabend, Aleksander Lobakin
Subject: [PATCH bpf-next v4 2/2] net, xdp: allow metadata > 32
Date: Wed, 6 Dec 2023 21:59:19 +0100
Message-ID: <20231206205919.404415-3-larysa.zaremba@intel.com>
In-Reply-To: <20231206205919.404415-1-larysa.zaremba@intel.com>
References: <20231206205919.404415-1-larysa.zaremba@intel.com>
X-Mailer: git-send-email 2.41.0
X-Mailing-List: bpf@vger.kernel.org

From: Aleksander Lobakin

32 bytes may not be enough for some custom metadata. Relax the restriction:
allow metadata larger than 32 bytes and make __skb_metadata_differs() work
with bigger lengths.

The metadata size is now limited only by the fact that it is stored as a u8
in skb_shared_info, so the maximum possible value is 255. The size still has
to be aligned to 4, so the actual upper limit becomes 252. Most driver
implementations will offer less, none can offer more.

Other important conditions, such as having enough space for xdp_frame
building, are already checked in bpf_xdp_adjust_meta().
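For illustration only (not part of this patch): a minimal XDP program sketch
that reserves a metadata area larger than the old 32-byte cap via
bpf_xdp_adjust_meta(). The struct layout and field names are hypothetical.

/* Illustrative sketch only. With the old 32-byte cap the
 * bpf_xdp_adjust_meta() call below would fail for this 64-byte struct;
 * with this change it only has to fit the headroom and the 252-byte limit.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical 64-byte custom metadata layout (multiple of 4 as required). */
struct custom_meta {
	__u32 rx_hash;
	__u32 rx_flags;
	__u64 rx_timestamp;
	__u8  opaque[48];
};

SEC("xdp")
int xdp_reserve_meta(struct xdp_md *ctx)
{
	struct custom_meta *meta;
	void *data;

	/* A negative delta grows the metadata area in front of the packet.
	 * The call fails if the size is not 4-byte aligned or exceeds the
	 * limits enforced by bpf_xdp_adjust_meta()/xdp_metalen_invalid().
	 */
	if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
		return XDP_PASS;

	data = (void *)(long)ctx->data;
	meta = (void *)(long)ctx->data_meta;
	if ((void *)(meta + 1) > data)	/* bounds check for the verifier */
		return XDP_PASS;

	meta->rx_hash = 0;	/* fill in whatever the consumer expects */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";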
Signed-off-by: Aleksander Lobakin
Signed-off-by: Larysa Zaremba
---
 include/linux/skbuff.h | 13 ++++++++-----
 include/net/xdp.h      |  7 ++++++-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index b370eb8d70f7..df6ef42639d8 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -4247,10 +4247,13 @@ static inline bool __skb_metadata_differs(const struct sk_buff *skb_a,
 {
 	const void *a = skb_metadata_end(skb_a);
 	const void *b = skb_metadata_end(skb_b);
-	/* Using more efficient varaiant than plain call to memcmp(). */
-#if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && BITS_PER_LONG == 64
 	u64 diffs = 0;
 
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
+	    BITS_PER_LONG != 64)
+		goto slow;
+
+	/* Using more efficient variant than plain call to memcmp(). */
 	switch (meta_len) {
 #define __it(x, op) (x -= sizeof(u##op))
 #define __it_diff(a, b, op) (*(u##op *)__it(a, op)) ^ (*(u##op *)__it(b, op))
@@ -4270,11 +4273,11 @@ static inline bool __skb_metadata_differs(const struct sk_buff *skb_a,
 		fallthrough;
 	case 4: diffs |= __it_diff(a, b, 32);
 		break;
+	default:
+slow:
+		return memcmp(a - meta_len, b - meta_len, meta_len);
 	}
 	return diffs;
-#else
-	return memcmp(a - meta_len, b - meta_len, meta_len);
-#endif
 }
 
 static inline bool skb_metadata_differs(const struct sk_buff *skb_a,
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 349c36fb5fd8..5d3673afc037 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -369,7 +369,12 @@ xdp_data_meta_unsupported(const struct xdp_buff *xdp)
 
 static inline bool xdp_metalen_invalid(unsigned long metalen)
 {
-	return (metalen & (sizeof(__u32) - 1)) || (metalen > 32);
+	unsigned long meta_max;
+
+	meta_max = type_max(typeof_member(struct skb_shared_info, meta_len));
+	BUILD_BUG_ON(!__builtin_constant_p(meta_max));
+
+	return !IS_ALIGNED(metalen, sizeof(u32)) || metalen > meta_max;
 }
 
 struct xdp_attachment_info {
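For reference, a rough user-space model of the new xdp_metalen_invalid()
semantics (illustration only; the kernel version uses type_max(),
typeof_member(), BUILD_BUG_ON() and IS_ALIGNED() from kernel headers):

/* Rough user-space model of the new check, for illustration only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool xdp_metalen_invalid(unsigned long metalen)
{
	/* skb_shared_info::meta_len is a u8, so 255 is the largest storable
	 * value; the 4-byte alignment requirement then caps usable sizes
	 * at 252.
	 */
	const unsigned long meta_max = UINT8_MAX;

	return (metalen % sizeof(uint32_t)) || metalen > meta_max;
}

int main(void)
{
	printf("252 -> %d\n", xdp_metalen_invalid(252));	/* 0: valid */
	printf("254 -> %d\n", xdp_metalen_invalid(254));	/* 1: not 4-aligned */
	printf("256 -> %d\n", xdp_metalen_invalid(256));	/* 1: exceeds u8 max */
	return 0;
}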