From patchwork Sun Sep 4 13:15:35 2022
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 12965182
X-Patchwork-Delegate: kuba@kernel.org
X-Patchwork-State: RFC
From: Leon Romanovsky
To: Steffen Klassert
Cc: Leon Romanovsky, "David S. Miller", Eric Dumazet, Herbert Xu,
 Jakub Kicinski, netdev@vger.kernel.org, Paolo Abeni, Raed Salem,
 Saeed Mahameed, Bharat Bhushan
Subject: [PATCH RFC xfrm-next v3 1/8] xfrm: add new full offload flag
Date: Sun, 4 Sep 2022 16:15:35 +0300
X-Mailer: git-send-email 2.37.2
X-Mailing-List: netdev@vger.kernel.org

From: Leon Romanovsky

In the next patches, the xfrm core code will be extended to support a
new type of offload - full offload. In that mode, both the policy and
the state must be specially configured in order to perform the whole
offloaded data path.

Full offload takes care of encryption, decryption, encapsulation and
other operations with headers.

As this mode is new for the XFRM policy flow, we can "start fresh" with
the flag bits and release the first and second bits for future use.
Reviewed-by: Raed Salem
Signed-off-by: Leon Romanovsky
---
 include/net/xfrm.h        | 7 +++++++
 include/uapi/linux/xfrm.h | 6 ++++++
 net/xfrm/xfrm_device.c    | 3 +++
 net/xfrm/xfrm_user.c      | 2 ++
 4 files changed, 18 insertions(+)

diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 6e8fa98f786f..b4d487053dfd 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -131,12 +131,19 @@ enum {
 	XFRM_DEV_OFFLOAD_OUT,
 };
 
+enum {
+	XFRM_DEV_OFFLOAD_UNSPECIFIED,
+	XFRM_DEV_OFFLOAD_CRYPTO,
+	XFRM_DEV_OFFLOAD_FULL,
+};
+
 struct xfrm_dev_offload {
 	struct net_device	*dev;
 	netdevice_tracker	dev_tracker;
 	struct net_device	*real_dev;
 	unsigned long		offload_handle;
 	u8			dir : 2;
+	u8			type : 2;
 };
 
 struct xfrm_mode {
diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
index 4f84ea7ee14c..463c6c1af23a 100644
--- a/include/uapi/linux/xfrm.h
+++ b/include/uapi/linux/xfrm.h
@@ -519,6 +519,12 @@ struct xfrm_user_offload {
  */
 #define XFRM_OFFLOAD_IPV6	1
 #define XFRM_OFFLOAD_INBOUND	2
+/* Two bits above are relevant for state path only, while
+ * offload is used for both policy and state flows.
+ *
+ * In policy offload mode, they are free and can be safely reused.
+ */
+#define XFRM_OFFLOAD_FULL	4
 
 struct xfrm_userpolicy_default {
 #define XFRM_USERPOLICY_UNSPEC	0
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 637ca8838436..6d1124eb1ec8 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -270,12 +270,15 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
 	else
 		xso->dir = XFRM_DEV_OFFLOAD_OUT;
 
+	xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
+
 	err = dev->xfrmdev_ops->xdo_dev_state_add(x);
 	if (err) {
 		xso->dev = NULL;
 		xso->dir = 0;
 		xso->real_dev = NULL;
 		netdev_put(dev, &xso->dev_tracker);
+		xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
 
 		if (err != -EOPNOTSUPP)
 			return err;
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 2ff017117730..9c0aef815730 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -854,6 +854,8 @@ static int copy_user_offload(struct xfrm_dev_offload *xso, struct sk_buff *skb)
 	xuo->ifindex = xso->dev->ifindex;
 	if (xso->dir == XFRM_DEV_OFFLOAD_IN)
 		xuo->flags = XFRM_OFFLOAD_INBOUND;
+	if (xso->type == XFRM_DEV_OFFLOAD_FULL)
+		xuo->flags |= XFRM_OFFLOAD_FULL;
 
 	return 0;
 }
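
For context, the sketch below shows how a driver's xdo_dev_state_add()
callback could dispatch on the new xso->type field once it advertises
full offload support. This is not part of this patch, and the mydrv_*
helpers are made up; with only this patch applied, the core sets
nothing but XFRM_DEV_OFFLOAD_CRYPTO on the state path.

#include <net/xfrm.h>

/* Hypothetical driver-side sketch, not part of this series:
 * dispatch on the new type field inside xdo_dev_state_add().
 * mydrv_add_crypto_state()/mydrv_add_full_state() are imaginary
 * helpers standing in for real driver logic.
 */
static int mydrv_xdo_dev_state_add(struct xfrm_state *x)
{
	struct xfrm_dev_offload *xso = &x->xso;

	switch (xso->type) {
	case XFRM_DEV_OFFLOAD_CRYPTO:
		/* Crypto-only offload: HW encrypts/decrypts, the
		 * stack still builds and parses the ESP headers.
		 */
		return mydrv_add_crypto_state(x);
	case XFRM_DEV_OFFLOAD_FULL:
		/* Full offload: HW also performs encapsulation and
		 * the other header operations.
		 */
		return mydrv_add_full_state(x);
	default:
		return -EOPNOTSUPP;
	}
}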
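
On the uapi side, userspace requests offload by attaching struct
xfrm_user_offload as the existing XFRMA_OFFLOAD_DEV netlink attribute
of an XFRM_MSG_NEWSA request. A minimal sketch of filling that
attribute payload with the new flag, assuming headers from a kernel
with this patch applied; the device name "eth0" and the surrounding
netlink message construction are placeholders left out here.

#include <net/if.h>		/* if_nametoindex() */
#include <linux/xfrm.h>		/* struct xfrm_user_offload, flags */

/* Sketch only: fill the XFRMA_OFFLOAD_DEV payload to request
 * full offload on a given netdev. The enclosing XFRM_MSG_NEWSA
 * message setup is omitted.
 */
static void fill_offload_attr(struct xfrm_user_offload *xuo)
{
	xuo->ifindex = if_nametoindex("eth0");	/* offloading netdev */
	xuo->flags = XFRM_OFFLOAD_FULL;		/* new flag, bit value 4 */
}

For an inbound state, the legacy state-path bits would still combine
with it, e.g. xuo->flags |= XFRM_OFFLOAD_INBOUND.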