From patchwork Tue Jun 28 19:48:08 2022
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 12898867
X-Patchwork-Delegate: bpf@iogearbox.net
X-Patchwork-State: RFC
X-Mailing-List: bpf@vger.kernel.org
From: Alexander Lobakin
To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
Cc: Alexander Lobakin, Larysa Zaremba, Michal Swiatkowski,
 Jesper Dangaard Brouer, Björn Töpel, Magnus Karlsson,
 Maciej Fijalkowski, Jonathan Lemon, Toke Hoiland-Jorgensen,
 Lorenzo Bianconi, "David S. Miller", Eric Dumazet, Jakub Kicinski,
 Paolo Abeni, Jesse Brandeburg, John Fastabend, Yajun Deng,
 Willem de Bruijn, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, xdp-hints@xdp-project.net
Subject: [PATCH RFC bpf-next 48/52] libbpf: compress Endianness ops with a macro
Date: Tue, 28 Jun 2022 21:48:08 +0200
Message-Id: <20220628194812.1453059-49-alexandr.lobakin@intel.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220628194812.1453059-1-alexandr.lobakin@intel.com>
References: <20220628194812.1453059-1-alexandr.lobakin@intel.com>

All of the Endianness helpers for BPF programs follow the same pattern,
so they can all be generated by one compression macro, which also
protects against typos and copy-paste mistakes. Not to mention the
saved LoCs, of course. Ah, if only we could define macros inside other
macros.

Signed-off-by: Alexander Lobakin <alexandr.lobakin@intel.com>
---
 tools/lib/bpf/bpf_endian.h | 26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/tools/lib/bpf/bpf_endian.h b/tools/lib/bpf/bpf_endian.h
index ec9db4feca9f..b03db6aa3f14 100644
--- a/tools/lib/bpf/bpf_endian.h
+++ b/tools/lib/bpf/bpf_endian.h
@@ -77,23 +77,15 @@
 # error "Fix your compiler's __BYTE_ORDER__?!"
 #endif
 
-#define bpf_htons(x)				\
+#define __bpf_endop(op, x)			\
 	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_htons(x) : __bpf_htons(x))
-#define bpf_ntohs(x)				\
-	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_ntohs(x) : __bpf_ntohs(x))
-#define bpf_htonl(x)				\
-	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_htonl(x) : __bpf_htonl(x))
-#define bpf_ntohl(x)				\
-	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_ntohl(x) : __bpf_ntohl(x))
-#define bpf_cpu_to_be64(x)			\
-	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_cpu_to_be64(x) : __bpf_cpu_to_be64(x))
-#define bpf_be64_to_cpu(x)			\
-	(__builtin_constant_p(x) ?		\
-	 __bpf_constant_be64_to_cpu(x) : __bpf_be64_to_cpu(x))
+	 __bpf_constant_##op(x) : __bpf_##op(x))
+
+#define bpf_htons(x)		__bpf_endop(htons, x)
+#define bpf_ntohs(x)		__bpf_endop(ntohs, x)
+#define bpf_htonl(x)		__bpf_endop(htonl, x)
+#define bpf_ntohl(x)		__bpf_endop(ntohl, x)
+#define bpf_cpu_to_be64(x)	__bpf_endop(cpu_to_be64, x)
+#define bpf_be64_to_cpu(x)	__bpf_endop(be64_to_cpu, x)
 
 #endif /* __BPF_ENDIAN__ */