From patchwork Mon Mar 21 10:42:00 2022
X-Patchwork-Submitter: Simon Horman
X-Patchwork-Id: 12787075
X-Patchwork-Delegate: kuba@kernel.org
From: Simon Horman
To: David Miller, Jakub Kicinski
Cc: Yinjun Zhang, netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 01/10] nfp: calculate ring masks without conditionals
Date: Mon, 21 Mar 2022 11:42:00 +0100
Message-Id: <20220321104209.273535-2-simon.horman@corigine.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jakub Kicinski

Ring enable masks are 64bit long. Replace mask calculation from:

	block_cnt == 64 ? 0xffffffffffffffffULL : (1 << block_cnt) - 1

with:

	(U64_MAX >> (64 - block_cnt))

to simplify the code.

Signed-off-by: Jakub Kicinski
Signed-off-by: Fei Qin
Signed-off-by: Simon Horman
---
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index ef8645b77e79..50f7ada0dedd 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -2959,11 +2959,11 @@ static int nfp_net_set_config_and_enable(struct nfp_net *nn)
 	for (r = 0; r < nn->dp.num_rx_rings; r++)
 		nfp_net_rx_ring_hw_cfg_write(nn, &nn->dp.rx_rings[r], r);
 
-	nn_writeq(nn, NFP_NET_CFG_TXRS_ENABLE, nn->dp.num_tx_rings == 64 ?
-		  0xffffffffffffffffULL : ((u64)1 << nn->dp.num_tx_rings) - 1);
+	nn_writeq(nn, NFP_NET_CFG_TXRS_ENABLE,
+		  U64_MAX >> (64 - nn->dp.num_tx_rings));
 
-	nn_writeq(nn, NFP_NET_CFG_RXRS_ENABLE, nn->dp.num_rx_rings == 64 ?
-		  0xffffffffffffffffULL : ((u64)1 << nn->dp.num_rx_rings) - 1);
+	nn_writeq(nn, NFP_NET_CFG_RXRS_ENABLE,
+		  U64_MAX >> (64 - nn->dp.num_rx_rings));
 
 	if (nn->dp.netdev)
 		nfp_net_write_mac_addr(nn, nn->dp.netdev->dev_addr);
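As a quick sanity check of the conversion above, the following stand-alone
user-space sketch (not part of the submitted patch) asserts that the old
conditional mask and the new shift-based mask agree for every ring count
the driver can program (1..64). UINT64_MAX from <stdint.h> stands in for
the kernel's U64_MAX; a count of 0 is never programmed, so the shift never
reaches 64.

	/* Compare the two ring-enable mask calculations for all valid counts. */
	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		unsigned int block_cnt;

		for (block_cnt = 1; block_cnt <= 64; block_cnt++) {
			uint64_t old_mask = block_cnt == 64 ?
				0xffffffffffffffffULL : ((uint64_t)1 << block_cnt) - 1;
			uint64_t new_mask = UINT64_MAX >> (64 - block_cnt);

			assert(old_mask == new_mask);
		}
		return 0;
	}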
From patchwork Mon Mar 21 10:42:01 2022
X-Patchwork-Submitter: Simon Horman
X-Patchwork-Id: 12787080
X-Patchwork-Delegate: kuba@kernel.org
From: Simon Horman
To: David Miller, Jakub Kicinski
Cc: Yinjun Zhang, netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 02/10] nfp: move the fast path code to separate files
Date: Mon, 21 Mar 2022 11:42:01 +0100
Message-Id: <20220321104209.273535-3-simon.horman@corigine.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
X-Mailing-List: netdev@vger.kernel.org

From: Jakub Kicinski

In preparation for support for a new datapath format, move all ring and
fast path logic into separate files. This is basically a verbatim move with
some wrapping functions; no new structures or functions are added.

The current data path is called NFD3, after the initial version of the
driver ABI it used. The non-fast-path but ring-related functions are moved
to the nfp_net_dp.c file.

Changes to Jakub's work:
* Rebase on xsk related code.
* Split the patch, move the callback changes to the next commit.
Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/Makefile | 4 + drivers/net/ethernet/netronome/nfp/nfd3/dp.c | 1368 +++++++++++ .../net/ethernet/netronome/nfp/nfd3/nfd3.h | 126 + .../net/ethernet/netronome/nfp/nfd3/rings.c | 243 ++ drivers/net/ethernet/netronome/nfp/nfd3/xsk.c | 408 ++++ drivers/net/ethernet/netronome/nfp/nfp_net.h | 102 +- .../ethernet/netronome/nfp/nfp_net_common.c | 2041 +---------------- .../ethernet/netronome/nfp/nfp_net_debugfs.c | 46 +- .../net/ethernet/netronome/nfp/nfp_net_dp.c | 448 ++++ .../net/ethernet/netronome/nfp/nfp_net_dp.h | 120 + .../net/ethernet/netronome/nfp/nfp_net_xsk.c | 436 +--- .../net/ethernet/netronome/nfp/nfp_net_xsk.h | 16 +- 12 files changed, 2782 insertions(+), 2576 deletions(-) create mode 100644 drivers/net/ethernet/netronome/nfp/nfd3/dp.c create mode 100644 drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h create mode 100644 drivers/net/ethernet/netronome/nfp/nfd3/rings.c create mode 100644 drivers/net/ethernet/netronome/nfp/nfd3/xsk.c create mode 100644 drivers/net/ethernet/netronome/nfp/nfp_net_dp.c create mode 100644 drivers/net/ethernet/netronome/nfp/nfp_net_dp.h diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index a35a382441d7..69168c03606f 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -20,12 +20,16 @@ nfp-objs := \ ccm_mbox.o \ devlink_param.o \ nfp_asm.o \ + nfd3/dp.o \ + nfd3/rings.o \ + nfd3/xsk.o \ nfp_app.o \ nfp_app_nic.o \ nfp_devlink.o \ nfp_hwmon.o \ nfp_main.o \ nfp_net_common.o \ + nfp_net_dp.o \ nfp_net_ctrl.o \ nfp_net_debugdump.o \ nfp_net_ethtool.o \ diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c new file mode 100644 index 000000000000..b2a34a09471b --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -0,0 +1,1368 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2015-2019 Netronome Systems, Inc. */ + +#include +#include + +#include "../nfp_app.h" +#include "../nfp_net.h" +#include "../nfp_net_dp.h" +#include "../nfp_net_xsk.h" +#include "../crypto/crypto.h" +#include "../crypto/fw.h" +#include "nfd3.h" + +/* Transmit processing + * + * One queue controller peripheral queue is used for transmit. The + * driver en-queues packets for transmit by advancing the write + * pointer. The device indicates that packets have transmitted by + * advancing the read pointer. The driver maintains a local copy of + * the read and write pointer in @struct nfp_net_tx_ring. The driver + * keeps @wr_p in sync with the queue controller write pointer and can + * determine how many packets have been transmitted by comparing its + * copy of the read pointer @rd_p with the read pointer maintained by + * the queue controller peripheral. + */ + +/* Wrappers for deciding when to stop and restart TX queues */ +static int nfp_nfd3_tx_ring_should_wake(struct nfp_net_tx_ring *tx_ring) +{ + return !nfp_net_tx_full(tx_ring, MAX_SKB_FRAGS * 4); +} + +static int nfp_nfd3_tx_ring_should_stop(struct nfp_net_tx_ring *tx_ring) +{ + return nfp_net_tx_full(tx_ring, MAX_SKB_FRAGS + 1); +} + +/** + * nfp_nfd3_tx_ring_stop() - stop tx ring + * @nd_q: netdev queue + * @tx_ring: driver tx queue structure + * + * Safely stop TX ring. Remember that while we are running .start_xmit() + * someone else may be cleaning the TX ring completions so we need to be + * extra careful here. 
+ */ +static void +nfp_nfd3_tx_ring_stop(struct netdev_queue *nd_q, + struct nfp_net_tx_ring *tx_ring) +{ + netif_tx_stop_queue(nd_q); + + /* We can race with the TX completion out of NAPI so recheck */ + smp_mb(); + if (unlikely(nfp_nfd3_tx_ring_should_wake(tx_ring))) + netif_tx_start_queue(nd_q); +} + +/** + * nfp_nfd3_tx_tso() - Set up Tx descriptor for LSO + * @r_vec: per-ring structure + * @txbuf: Pointer to driver soft TX descriptor + * @txd: Pointer to HW TX descriptor + * @skb: Pointer to SKB + * @md_bytes: Prepend length + * + * Set up Tx descriptor for LSO, do nothing for non-LSO skbs. + * Return error on packet header greater than maximum supported LSO header size. + */ +static void +nfp_nfd3_tx_tso(struct nfp_net_r_vector *r_vec, struct nfp_nfd3_tx_buf *txbuf, + struct nfp_nfd3_tx_desc *txd, struct sk_buff *skb, u32 md_bytes) +{ + u32 l3_offset, l4_offset, hdrlen; + u16 mss; + + if (!skb_is_gso(skb)) + return; + + if (!skb->encapsulation) { + l3_offset = skb_network_offset(skb); + l4_offset = skb_transport_offset(skb); + hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb); + } else { + l3_offset = skb_inner_network_offset(skb); + l4_offset = skb_inner_transport_offset(skb); + hdrlen = skb_inner_transport_header(skb) - skb->data + + inner_tcp_hdrlen(skb); + } + + txbuf->pkt_cnt = skb_shinfo(skb)->gso_segs; + txbuf->real_len += hdrlen * (txbuf->pkt_cnt - 1); + + mss = skb_shinfo(skb)->gso_size & NFD3_DESC_TX_MSS_MASK; + txd->l3_offset = l3_offset - md_bytes; + txd->l4_offset = l4_offset - md_bytes; + txd->lso_hdrlen = hdrlen - md_bytes; + txd->mss = cpu_to_le16(mss); + txd->flags |= NFD3_DESC_TX_LSO; + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_lso++; + u64_stats_update_end(&r_vec->tx_sync); +} + +/** + * nfp_nfd3_tx_csum() - Set TX CSUM offload flags in TX descriptor + * @dp: NFP Net data path struct + * @r_vec: per-ring structure + * @txbuf: Pointer to driver soft TX descriptor + * @txd: Pointer to TX descriptor + * @skb: Pointer to SKB + * + * This function sets the TX checksum flags in the TX descriptor based + * on the configuration and the protocol of the packet to be transmitted. + */ +static void +nfp_nfd3_tx_csum(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct nfp_nfd3_tx_buf *txbuf, struct nfp_nfd3_tx_desc *txd, + struct sk_buff *skb) +{ + struct ipv6hdr *ipv6h; + struct iphdr *iph; + u8 l4_hdr; + + if (!(dp->ctrl & NFP_NET_CFG_CTRL_TXCSUM)) + return; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return; + + txd->flags |= NFD3_DESC_TX_CSUM; + if (skb->encapsulation) + txd->flags |= NFD3_DESC_TX_ENCAP; + + iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); + ipv6h = skb->encapsulation ? 
inner_ipv6_hdr(skb) : ipv6_hdr(skb); + + if (iph->version == 4) { + txd->flags |= NFD3_DESC_TX_IP4_CSUM; + l4_hdr = iph->protocol; + } else if (ipv6h->version == 6) { + l4_hdr = ipv6h->nexthdr; + } else { + nn_dp_warn(dp, "partial checksum but ipv=%x!\n", iph->version); + return; + } + + switch (l4_hdr) { + case IPPROTO_TCP: + txd->flags |= NFD3_DESC_TX_TCP_CSUM; + break; + case IPPROTO_UDP: + txd->flags |= NFD3_DESC_TX_UDP_CSUM; + break; + default: + nn_dp_warn(dp, "partial checksum but l4 proto=%x!\n", l4_hdr); + return; + } + + u64_stats_update_begin(&r_vec->tx_sync); + if (skb->encapsulation) + r_vec->hw_csum_tx_inner += txbuf->pkt_cnt; + else + r_vec->hw_csum_tx += txbuf->pkt_cnt; + u64_stats_update_end(&r_vec->tx_sync); +} + +static int nfp_nfd3_prep_tx_meta(struct sk_buff *skb, u64 tls_handle) +{ + struct metadata_dst *md_dst = skb_metadata_dst(skb); + unsigned char *data; + u32 meta_id = 0; + int md_bytes; + + if (likely(!md_dst && !tls_handle)) + return 0; + if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) { + if (!tls_handle) + return 0; + md_dst = NULL; + } + + md_bytes = 4 + !!md_dst * 4 + !!tls_handle * 8; + + if (unlikely(skb_cow_head(skb, md_bytes))) + return -ENOMEM; + + meta_id = 0; + data = skb_push(skb, md_bytes) + md_bytes; + if (md_dst) { + data -= 4; + put_unaligned_be32(md_dst->u.port_info.port_id, data); + meta_id = NFP_NET_META_PORTID; + } + if (tls_handle) { + /* conn handle is opaque, we just use u64 to be able to quickly + * compare it to zero + */ + data -= 8; + memcpy(data, &tls_handle, sizeof(tls_handle)); + meta_id <<= NFP_NET_META_FIELD_SIZE; + meta_id |= NFP_NET_META_CONN_HANDLE; + } + + data -= 4; + put_unaligned_be32(meta_id, data); + + return md_bytes; +} + +/** + * nfp_nfd3_tx() - Main transmit entry point + * @skb: SKB to transmit + * @netdev: netdev structure + * + * Return: NETDEV_TX_OK on success. + */ +netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev) +{ + struct nfp_net *nn = netdev_priv(netdev); + int f, nr_frags, wr_idx, md_bytes; + struct nfp_net_tx_ring *tx_ring; + struct nfp_net_r_vector *r_vec; + struct nfp_nfd3_tx_buf *txbuf; + struct nfp_nfd3_tx_desc *txd; + struct netdev_queue *nd_q; + const skb_frag_t *frag; + struct nfp_net_dp *dp; + dma_addr_t dma_addr; + unsigned int fsize; + u64 tls_handle = 0; + u16 qidx; + + dp = &nn->dp; + qidx = skb_get_queue_mapping(skb); + tx_ring = &dp->tx_rings[qidx]; + r_vec = tx_ring->r_vec; + + nr_frags = skb_shinfo(skb)->nr_frags; + + if (unlikely(nfp_net_tx_full(tx_ring, nr_frags + 1))) { + nn_dp_warn(dp, "TX ring %d busy. 
wrp=%u rdp=%u\n", + qidx, tx_ring->wr_p, tx_ring->rd_p); + nd_q = netdev_get_tx_queue(dp->netdev, qidx); + netif_tx_stop_queue(nd_q); + nfp_net_tx_xmit_more_flush(tx_ring); + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_busy++; + u64_stats_update_end(&r_vec->tx_sync); + return NETDEV_TX_BUSY; + } + + skb = nfp_net_tls_tx(dp, r_vec, skb, &tls_handle, &nr_frags); + if (unlikely(!skb)) { + nfp_net_tx_xmit_more_flush(tx_ring); + return NETDEV_TX_OK; + } + + md_bytes = nfp_nfd3_prep_tx_meta(skb, tls_handle); + if (unlikely(md_bytes < 0)) + goto err_flush; + + /* Start with the head skbuf */ + dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb), + DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_dma_err; + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + + /* Stash the soft descriptor of the head then initialize it */ + txbuf = &tx_ring->txbufs[wr_idx]; + txbuf->skb = skb; + txbuf->dma_addr = dma_addr; + txbuf->fidx = -1; + txbuf->pkt_cnt = 1; + txbuf->real_len = skb->len; + + /* Build TX descriptor */ + txd = &tx_ring->txds[wr_idx]; + txd->offset_eop = (nr_frags ? 0 : NFD3_DESC_TX_EOP) | md_bytes; + txd->dma_len = cpu_to_le16(skb_headlen(skb)); + nfp_desc_set_dma_addr(txd, dma_addr); + txd->data_len = cpu_to_le16(skb->len); + + txd->flags = 0; + txd->mss = 0; + txd->lso_hdrlen = 0; + + /* Do not reorder - tso may adjust pkt cnt, vlan may override fields */ + nfp_nfd3_tx_tso(r_vec, txbuf, txd, skb, md_bytes); + nfp_nfd3_tx_csum(dp, r_vec, txbuf, txd, skb); + if (skb_vlan_tag_present(skb) && dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN) { + txd->flags |= NFD3_DESC_TX_VLAN; + txd->vlan = cpu_to_le16(skb_vlan_tag_get(skb)); + } + + /* Gather DMA */ + if (nr_frags > 0) { + __le64 second_half; + + /* all descs must match except for in addr, length and eop */ + second_half = txd->vals8[1]; + + for (f = 0; f < nr_frags; f++) { + frag = &skb_shinfo(skb)->frags[f]; + fsize = skb_frag_size(frag); + + dma_addr = skb_frag_dma_map(dp->dev, frag, 0, + fsize, DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_unmap; + + wr_idx = D_IDX(tx_ring, wr_idx + 1); + tx_ring->txbufs[wr_idx].skb = skb; + tx_ring->txbufs[wr_idx].dma_addr = dma_addr; + tx_ring->txbufs[wr_idx].fidx = f; + + txd = &tx_ring->txds[wr_idx]; + txd->dma_len = cpu_to_le16(fsize); + nfp_desc_set_dma_addr(txd, dma_addr); + txd->offset_eop = md_bytes | + ((f == nr_frags - 1) ? 
NFD3_DESC_TX_EOP : 0); + txd->vals8[1] = second_half; + } + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_gather++; + u64_stats_update_end(&r_vec->tx_sync); + } + + skb_tx_timestamp(skb); + + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + + tx_ring->wr_p += nr_frags + 1; + if (nfp_nfd3_tx_ring_should_stop(tx_ring)) + nfp_nfd3_tx_ring_stop(nd_q, tx_ring); + + tx_ring->wr_ptr_add += nr_frags + 1; + if (__netdev_tx_sent_queue(nd_q, txbuf->real_len, netdev_xmit_more())) + nfp_net_tx_xmit_more_flush(tx_ring); + + return NETDEV_TX_OK; + +err_unmap: + while (--f >= 0) { + frag = &skb_shinfo(skb)->frags[f]; + dma_unmap_page(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + tx_ring->txbufs[wr_idx].skb = NULL; + tx_ring->txbufs[wr_idx].dma_addr = 0; + tx_ring->txbufs[wr_idx].fidx = -2; + wr_idx = wr_idx - 1; + if (wr_idx < 0) + wr_idx += tx_ring->cnt; + } + dma_unmap_single(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, + skb_headlen(skb), DMA_TO_DEVICE); + tx_ring->txbufs[wr_idx].skb = NULL; + tx_ring->txbufs[wr_idx].dma_addr = 0; + tx_ring->txbufs[wr_idx].fidx = -2; +err_dma_err: + nn_dp_warn(dp, "Failed to map DMA TX buffer\n"); +err_flush: + nfp_net_tx_xmit_more_flush(tx_ring); + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_errors++; + u64_stats_update_end(&r_vec->tx_sync); + nfp_net_tls_tx_undo(skb, tls_handle); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +/** + * nfp_nfd3_tx_complete() - Handled completed TX packets + * @tx_ring: TX ring structure + * @budget: NAPI budget (only used as bool to determine if in NAPI context) + */ +void nfp_nfd3_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + u32 done_pkts = 0, done_bytes = 0; + struct netdev_queue *nd_q; + u32 qcp_rd_p; + int todo; + + if (tx_ring->wr_p == tx_ring->rd_p) + return; + + /* Work out how many descriptors have been transmitted */ + qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); + + if (qcp_rd_p == tx_ring->qcp_rd_p) + return; + + todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); + + while (todo--) { + const skb_frag_t *frag; + struct nfp_nfd3_tx_buf *tx_buf; + struct sk_buff *skb; + int fidx, nr_frags; + int idx; + + idx = D_IDX(tx_ring, tx_ring->rd_p++); + tx_buf = &tx_ring->txbufs[idx]; + + skb = tx_buf->skb; + if (!skb) + continue; + + nr_frags = skb_shinfo(skb)->nr_frags; + fidx = tx_buf->fidx; + + if (fidx == -1) { + /* unmap head */ + dma_unmap_single(dp->dev, tx_buf->dma_addr, + skb_headlen(skb), DMA_TO_DEVICE); + + done_pkts += tx_buf->pkt_cnt; + done_bytes += tx_buf->real_len; + } else { + /* unmap fragment */ + frag = &skb_shinfo(skb)->frags[fidx]; + dma_unmap_page(dp->dev, tx_buf->dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + } + + /* check for last gather fragment */ + if (fidx == nr_frags - 1) + napi_consume_skb(skb, budget); + + tx_buf->dma_addr = 0; + tx_buf->skb = NULL; + tx_buf->fidx = -2; + } + + tx_ring->qcp_rd_p = qcp_rd_p; + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_bytes += done_bytes; + r_vec->tx_pkts += done_pkts; + u64_stats_update_end(&r_vec->tx_sync); + + if (!dp->netdev) + return; + + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + netdev_tx_completed_queue(nd_q, done_pkts, done_bytes); + if (nfp_nfd3_tx_ring_should_wake(tx_ring)) { + /* Make sure TX thread will see updated tx_ring->rd_p */ + smp_mb(); + + if (unlikely(netif_tx_queue_stopped(nd_q))) + netif_tx_wake_queue(nd_q); + } + + 
WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, + "TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", + tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); +} + +static bool nfp_nfd3_xdp_complete(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + u32 done_pkts = 0, done_bytes = 0; + bool done_all; + int idx, todo; + u32 qcp_rd_p; + + /* Work out how many descriptors have been transmitted */ + qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); + + if (qcp_rd_p == tx_ring->qcp_rd_p) + return true; + + todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); + + done_all = todo <= NFP_NET_XDP_MAX_COMPLETE; + todo = min(todo, NFP_NET_XDP_MAX_COMPLETE); + + tx_ring->qcp_rd_p = D_IDX(tx_ring, tx_ring->qcp_rd_p + todo); + + done_pkts = todo; + while (todo--) { + idx = D_IDX(tx_ring, tx_ring->rd_p); + tx_ring->rd_p++; + + done_bytes += tx_ring->txbufs[idx].real_len; + } + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_bytes += done_bytes; + r_vec->tx_pkts += done_pkts; + u64_stats_update_end(&r_vec->tx_sync); + + WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, + "XDP TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", + tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); + + return done_all; +} + +/* Receive processing + */ + +static void * +nfp_nfd3_napi_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) +{ + void *frag; + + if (!dp->xdp_prog) { + frag = napi_alloc_frag(dp->fl_bufsz); + if (unlikely(!frag)) + return NULL; + } else { + struct page *page; + + page = dev_alloc_page(); + if (unlikely(!page)) + return NULL; + frag = page_address(page); + } + + *dma_addr = nfp_net_dma_map_rx(dp, frag); + if (dma_mapping_error(dp->dev, *dma_addr)) { + nfp_net_free_frag(frag, dp->xdp_prog); + nn_dp_warn(dp, "Failed to map DMA RX buffer\n"); + return NULL; + } + + return frag; +} + +/** + * nfp_nfd3_rx_give_one() - Put mapped skb on the software and hardware rings + * @dp: NFP Net data path struct + * @rx_ring: RX ring structure + * @frag: page fragment buffer + * @dma_addr: DMA address of skb mapping + */ +static void +nfp_nfd3_rx_give_one(const struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring, + void *frag, dma_addr_t dma_addr) +{ + unsigned int wr_idx; + + wr_idx = D_IDX(rx_ring, rx_ring->wr_p); + + nfp_net_dma_sync_dev_rx(dp, dma_addr); + + /* Stash SKB and DMA address away */ + rx_ring->rxbufs[wr_idx].frag = frag; + rx_ring->rxbufs[wr_idx].dma_addr = dma_addr; + + /* Fill freelist descriptor */ + rx_ring->rxds[wr_idx].fld.reserved = 0; + rx_ring->rxds[wr_idx].fld.meta_len_dd = 0; + nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld, + dma_addr + dp->rx_dma_off); + + rx_ring->wr_p++; + if (!(rx_ring->wr_p % NFP_NET_FL_BATCH)) { + /* Update write pointer of the freelist queue. Make + * sure all writes are flushed before telling the hardware. 
+ */ + wmb(); + nfp_qcp_wr_ptr_add(rx_ring->qcp_fl, NFP_NET_FL_BATCH); + } +} + +/** + * nfp_nfd3_rx_ring_fill_freelist() - Give buffers from the ring to FW + * @dp: NFP Net data path struct + * @rx_ring: RX ring to fill + */ +void nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + unsigned int i; + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) + return nfp_net_xsk_rx_ring_fill_freelist(rx_ring); + + for (i = 0; i < rx_ring->cnt - 1; i++) + nfp_nfd3_rx_give_one(dp, rx_ring, rx_ring->rxbufs[i].frag, + rx_ring->rxbufs[i].dma_addr); +} + +/** + * nfp_nfd3_rx_csum_has_errors() - group check if rxd has any csum errors + * @flags: RX descriptor flags field in CPU byte order + */ +static int nfp_nfd3_rx_csum_has_errors(u16 flags) +{ + u16 csum_all_checked, csum_all_ok; + + csum_all_checked = flags & __PCIE_DESC_RX_CSUM_ALL; + csum_all_ok = flags & __PCIE_DESC_RX_CSUM_ALL_OK; + + return csum_all_checked != (csum_all_ok << PCIE_DESC_RX_CSUM_OK_SHIFT); +} + +/** + * nfp_nfd3_rx_csum() - set SKB checksum field based on RX descriptor flags + * @dp: NFP Net data path struct + * @r_vec: per-ring structure + * @rxd: Pointer to RX descriptor + * @meta: Parsed metadata prepend + * @skb: Pointer to SKB + */ +void +nfp_nfd3_rx_csum(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + const struct nfp_net_rx_desc *rxd, + const struct nfp_meta_parsed *meta, struct sk_buff *skb) +{ + skb_checksum_none_assert(skb); + + if (!(dp->netdev->features & NETIF_F_RXCSUM)) + return; + + if (meta->csum_type) { + skb->ip_summed = meta->csum_type; + skb->csum = meta->csum; + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_complete++; + u64_stats_update_end(&r_vec->rx_sync); + return; + } + + if (nfp_nfd3_rx_csum_has_errors(le16_to_cpu(rxd->rxd.flags))) { + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_error++; + u64_stats_update_end(&r_vec->rx_sync); + return; + } + + /* Assume that the firmware will never report inner CSUM_OK unless outer + * L4 headers were successfully parsed. FW will always report zero UDP + * checksum as CSUM_OK. 
+ */ + if (rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM_OK || + rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM_OK) { + __skb_incr_checksum_unnecessary(skb); + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_ok++; + u64_stats_update_end(&r_vec->rx_sync); + } + + if (rxd->rxd.flags & PCIE_DESC_RX_I_TCP_CSUM_OK || + rxd->rxd.flags & PCIE_DESC_RX_I_UDP_CSUM_OK) { + __skb_incr_checksum_unnecessary(skb); + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_inner_ok++; + u64_stats_update_end(&r_vec->rx_sync); + } +} + +static void +nfp_nfd3_set_hash(struct net_device *netdev, struct nfp_meta_parsed *meta, + unsigned int type, __be32 *hash) +{ + if (!(netdev->features & NETIF_F_RXHASH)) + return; + + switch (type) { + case NFP_NET_RSS_IPV4: + case NFP_NET_RSS_IPV6: + case NFP_NET_RSS_IPV6_EX: + meta->hash_type = PKT_HASH_TYPE_L3; + break; + default: + meta->hash_type = PKT_HASH_TYPE_L4; + break; + } + + meta->hash = get_unaligned_be32(hash); +} + +static void +nfp_nfd3_set_hash_desc(struct net_device *netdev, struct nfp_meta_parsed *meta, + void *data, struct nfp_net_rx_desc *rxd) +{ + struct nfp_net_rx_hash *rx_hash = data; + + if (!(rxd->rxd.flags & PCIE_DESC_RX_RSS)) + return; + + nfp_nfd3_set_hash(netdev, meta, get_unaligned_be32(&rx_hash->hash_type), + &rx_hash->hash); +} + +bool +nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, + void *data, void *pkt, unsigned int pkt_len, int meta_len) +{ + u32 meta_info; + + meta_info = get_unaligned_be32(data); + data += 4; + + while (meta_info) { + switch (meta_info & NFP_NET_META_FIELD_MASK) { + case NFP_NET_META_HASH: + meta_info >>= NFP_NET_META_FIELD_SIZE; + nfp_nfd3_set_hash(netdev, meta, + meta_info & NFP_NET_META_FIELD_MASK, + (__be32 *)data); + data += 4; + break; + case NFP_NET_META_MARK: + meta->mark = get_unaligned_be32(data); + data += 4; + break; + case NFP_NET_META_PORTID: + meta->portid = get_unaligned_be32(data); + data += 4; + break; + case NFP_NET_META_CSUM: + meta->csum_type = CHECKSUM_COMPLETE; + meta->csum = + (__force __wsum)__get_unaligned_cpu32(data); + data += 4; + break; + case NFP_NET_META_RESYNC_INFO: + if (nfp_net_tls_rx_resync_req(netdev, data, pkt, + pkt_len)) + return false; + data += sizeof(struct nfp_net_tls_resync_req); + break; + default: + return true; + } + + meta_info >>= NFP_NET_META_FIELD_SIZE; + } + + return data != pkt; +} + +static void +nfp_nfd3_rx_drop(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct nfp_net_rx_ring *rx_ring, struct nfp_net_rx_buf *rxbuf, + struct sk_buff *skb) +{ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_drops++; + /* If we have both skb and rxbuf the replacement buffer allocation + * must have failed, count this as an alloc failure. + */ + if (skb && rxbuf) + r_vec->rx_replace_buf_alloc_fail++; + u64_stats_update_end(&r_vec->rx_sync); + + /* skb is build based on the frag, free_skb() would free the frag + * so to be able to reuse it we need an extra ref. 
+ */ + if (skb && rxbuf && skb->head == rxbuf->frag) + page_ref_inc(virt_to_head_page(rxbuf->frag)); + if (rxbuf) + nfp_nfd3_rx_give_one(dp, rx_ring, rxbuf->frag, rxbuf->dma_addr); + if (skb) + dev_kfree_skb_any(skb); +} + +static bool +nfp_nfd3_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring, + struct nfp_net_tx_ring *tx_ring, + struct nfp_net_rx_buf *rxbuf, unsigned int dma_off, + unsigned int pkt_len, bool *completed) +{ + unsigned int dma_map_sz = dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA; + struct nfp_nfd3_tx_buf *txbuf; + struct nfp_nfd3_tx_desc *txd; + int wr_idx; + + /* Reject if xdp_adjust_tail grow packet beyond DMA area */ + if (pkt_len + dma_off > dma_map_sz) + return false; + + if (unlikely(nfp_net_tx_full(tx_ring, 1))) { + if (!*completed) { + nfp_nfd3_xdp_complete(tx_ring); + *completed = true; + } + + if (unlikely(nfp_net_tx_full(tx_ring, 1))) { + nfp_nfd3_rx_drop(dp, rx_ring->r_vec, rx_ring, rxbuf, + NULL); + return false; + } + } + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + + /* Stash the soft descriptor of the head then initialize it */ + txbuf = &tx_ring->txbufs[wr_idx]; + + nfp_nfd3_rx_give_one(dp, rx_ring, txbuf->frag, txbuf->dma_addr); + + txbuf->frag = rxbuf->frag; + txbuf->dma_addr = rxbuf->dma_addr; + txbuf->fidx = -1; + txbuf->pkt_cnt = 1; + txbuf->real_len = pkt_len; + + dma_sync_single_for_device(dp->dev, rxbuf->dma_addr + dma_off, + pkt_len, DMA_BIDIRECTIONAL); + + /* Build TX descriptor */ + txd = &tx_ring->txds[wr_idx]; + txd->offset_eop = NFD3_DESC_TX_EOP; + txd->dma_len = cpu_to_le16(pkt_len); + nfp_desc_set_dma_addr(txd, rxbuf->dma_addr + dma_off); + txd->data_len = cpu_to_le16(pkt_len); + + txd->flags = 0; + txd->mss = 0; + txd->lso_hdrlen = 0; + + tx_ring->wr_p++; + tx_ring->wr_ptr_add++; + return true; +} + +/** + * nfp_nfd3_rx() - receive up to @budget packets on @rx_ring + * @rx_ring: RX ring to receive from + * @budget: NAPI budget + * + * Note, this function is separated out from the napi poll function to + * more cleanly separate packet receive code from other bookkeeping + * functions performed in the napi poll function. + * + * Return: Number of packets received. + */ +static int nfp_nfd3_rx(struct nfp_net_rx_ring *rx_ring, int budget) +{ + struct nfp_net_r_vector *r_vec = rx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + struct nfp_net_tx_ring *tx_ring; + struct bpf_prog *xdp_prog; + bool xdp_tx_cmpl = false; + unsigned int true_bufsz; + struct sk_buff *skb; + int pkts_polled = 0; + struct xdp_buff xdp; + int idx; + + xdp_prog = READ_ONCE(dp->xdp_prog); + true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz; + xdp_init_buff(&xdp, PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM, + &rx_ring->xdp_rxq); + tx_ring = r_vec->xdp_ring; + + while (pkts_polled < budget) { + unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; + struct nfp_net_rx_buf *rxbuf; + struct nfp_net_rx_desc *rxd; + struct nfp_meta_parsed meta; + bool redir_egress = false; + struct net_device *netdev; + dma_addr_t new_dma_addr; + u32 meta_len_xdp = 0; + void *new_frag; + + idx = D_IDX(rx_ring, rx_ring->rd_p); + + rxd = &rx_ring->rxds[idx]; + if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) + break; + + /* Memory barrier to ensure that we won't do other reads + * before the DD bit. 
+ */ + dma_rmb(); + + memset(&meta, 0, sizeof(meta)); + + rx_ring->rd_p++; + pkts_polled++; + + rxbuf = &rx_ring->rxbufs[idx]; + /* < meta_len > + * <-- [rx_offset] --> + * --------------------------------------------------------- + * | [XX] | metadata | packet | XXXX | + * --------------------------------------------------------- + * <---------------- data_len ---------------> + * + * The rx_offset is fixed for all packets, the meta_len can vary + * on a packet by packet basis. If rx_offset is set to zero + * (_RX_OFFSET_DYNAMIC) metadata starts at the beginning of the + * buffer and is immediately followed by the packet (no [XX]). + */ + meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; + data_len = le16_to_cpu(rxd->rxd.data_len); + pkt_len = data_len - meta_len; + + pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; + if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) + pkt_off += meta_len; + else + pkt_off += dp->rx_offset; + meta_off = pkt_off - meta_len; + + /* Stats update */ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_pkts++; + r_vec->rx_bytes += pkt_len; + u64_stats_update_end(&r_vec->rx_sync); + + if (unlikely(meta_len > NFP_NET_MAX_PREPEND || + (dp->rx_offset && meta_len > dp->rx_offset))) { + nn_dp_warn(dp, "oversized RX packet metadata %u\n", + meta_len); + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + continue; + } + + nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, + data_len); + + if (!dp->chained_metadata_format) { + nfp_nfd3_set_hash_desc(dp->netdev, &meta, + rxbuf->frag + meta_off, rxd); + } else if (meta_len) { + if (unlikely(nfp_nfd3_parse_meta(dp->netdev, &meta, + rxbuf->frag + meta_off, + rxbuf->frag + pkt_off, + pkt_len, meta_len))) { + nn_dp_warn(dp, "invalid RX packet metadata\n"); + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, + NULL); + continue; + } + } + + if (xdp_prog && !meta.portid) { + void *orig_data = rxbuf->frag + pkt_off; + unsigned int dma_off; + int act; + + xdp_prepare_buff(&xdp, + rxbuf->frag + NFP_NET_RX_BUF_HEADROOM, + pkt_off - NFP_NET_RX_BUF_HEADROOM, + pkt_len, true); + + act = bpf_prog_run_xdp(xdp_prog, &xdp); + + pkt_len = xdp.data_end - xdp.data; + pkt_off += xdp.data - orig_data; + + switch (act) { + case XDP_PASS: + meta_len_xdp = xdp.data - xdp.data_meta; + break; + case XDP_TX: + dma_off = pkt_off - NFP_NET_RX_BUF_HEADROOM; + if (unlikely(!nfp_nfd3_tx_xdp_buf(dp, rx_ring, + tx_ring, + rxbuf, + dma_off, + pkt_len, + &xdp_tx_cmpl))) + trace_xdp_exception(dp->netdev, + xdp_prog, act); + continue; + default: + bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_DROP: + nfp_nfd3_rx_give_one(dp, rx_ring, rxbuf->frag, + rxbuf->dma_addr); + continue; + } + } + + if (likely(!meta.portid)) { + netdev = dp->netdev; + } else if (meta.portid == NFP_META_PORT_ID_CTRL) { + struct nfp_net *nn = netdev_priv(dp->netdev); + + nfp_app_ctrl_rx_raw(nn->app, rxbuf->frag + pkt_off, + pkt_len); + nfp_nfd3_rx_give_one(dp, rx_ring, rxbuf->frag, + rxbuf->dma_addr); + continue; + } else { + struct nfp_net *nn; + + nn = netdev_priv(dp->netdev); + netdev = nfp_app_dev_get(nn->app, meta.portid, + &redir_egress); + if (unlikely(!netdev)) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, + NULL); + continue; + } + + if (nfp_netdev_is_nfp_repr(netdev)) + nfp_repr_inc_rx_stats(netdev, pkt_len); + } + + skb = build_skb(rxbuf->frag, true_bufsz); + if (unlikely(!skb)) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + 
continue; + } + new_frag = nfp_nfd3_napi_alloc_one(dp, &new_dma_addr); + if (unlikely(!new_frag)) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); + continue; + } + + nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); + + nfp_nfd3_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); + + skb_reserve(skb, pkt_off); + skb_put(skb, pkt_len); + + skb->mark = meta.mark; + skb_set_hash(skb, meta.hash, meta.hash_type); + + skb_record_rx_queue(skb, rx_ring->idx); + skb->protocol = eth_type_trans(skb, netdev); + + nfp_nfd3_rx_csum(dp, r_vec, rxd, &meta, skb); + +#ifdef CONFIG_TLS_DEVICE + if (rxd->rxd.flags & PCIE_DESC_RX_DECRYPTED) { + skb->decrypted = true; + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_tls_rx++; + u64_stats_update_end(&r_vec->rx_sync); + } +#endif + + if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + le16_to_cpu(rxd->rxd.vlan)); + if (meta_len_xdp) + skb_metadata_set(skb, meta_len_xdp); + + if (likely(!redir_egress)) { + napi_gro_receive(&rx_ring->r_vec->napi, skb); + } else { + skb->dev = netdev; + skb_reset_network_header(skb); + __skb_push(skb, ETH_HLEN); + dev_queue_xmit(skb); + } + } + + if (xdp_prog) { + if (tx_ring->wr_ptr_add) + nfp_net_tx_xmit_more_flush(tx_ring); + else if (unlikely(tx_ring->wr_p != tx_ring->rd_p) && + !xdp_tx_cmpl) + if (!nfp_nfd3_xdp_complete(tx_ring)) + pkts_polled = budget; + } + + return pkts_polled; +} + +/** + * nfp_nfd3_poll() - napi poll function + * @napi: NAPI structure + * @budget: NAPI budget + * + * Return: number of packets polled. + */ +int nfp_nfd3_poll(struct napi_struct *napi, int budget) +{ + struct nfp_net_r_vector *r_vec = + container_of(napi, struct nfp_net_r_vector, napi); + unsigned int pkts_polled = 0; + + if (r_vec->tx_ring) + nfp_nfd3_tx_complete(r_vec->tx_ring, budget); + if (r_vec->rx_ring) + pkts_polled = nfp_nfd3_rx(r_vec->rx_ring, budget); + + if (pkts_polled < budget) + if (napi_complete_done(napi, pkts_polled)) + nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); + + if (r_vec->nfp_net->rx_coalesce_adapt_on && r_vec->rx_ring) { + struct dim_sample dim_sample = {}; + unsigned int start; + u64 pkts, bytes; + + do { + start = u64_stats_fetch_begin(&r_vec->rx_sync); + pkts = r_vec->rx_pkts; + bytes = r_vec->rx_bytes; + } while (u64_stats_fetch_retry(&r_vec->rx_sync, start)); + + dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); + net_dim(&r_vec->rx_dim, dim_sample); + } + + if (r_vec->nfp_net->tx_coalesce_adapt_on && r_vec->tx_ring) { + struct dim_sample dim_sample = {}; + unsigned int start; + u64 pkts, bytes; + + do { + start = u64_stats_fetch_begin(&r_vec->tx_sync); + pkts = r_vec->tx_pkts; + bytes = r_vec->tx_bytes; + } while (u64_stats_fetch_retry(&r_vec->tx_sync, start)); + + dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); + net_dim(&r_vec->tx_dim, dim_sample); + } + + return pkts_polled; +} + +/* Control device data path + */ + +static bool +nfp_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old) +{ + unsigned int real_len = skb->len, meta_len = 0; + struct nfp_net_tx_ring *tx_ring; + struct nfp_nfd3_tx_buf *txbuf; + struct nfp_nfd3_tx_desc *txd; + struct nfp_net_dp *dp; + dma_addr_t dma_addr; + int wr_idx; + + dp = &r_vec->nfp_net->dp; + tx_ring = r_vec->tx_ring; + + if (WARN_ON_ONCE(skb_shinfo(skb)->nr_frags)) { + nn_dp_warn(dp, "Driver's CTRL TX does not implement gather\n"); + goto err_free; + } + + if (unlikely(nfp_net_tx_full(tx_ring, 1))) { + u64_stats_update_begin(&r_vec->tx_sync); + 
r_vec->tx_busy++; + u64_stats_update_end(&r_vec->tx_sync); + if (!old) + __skb_queue_tail(&r_vec->queue, skb); + else + __skb_queue_head(&r_vec->queue, skb); + return true; + } + + if (nfp_app_ctrl_has_meta(nn->app)) { + if (unlikely(skb_headroom(skb) < 8)) { + nn_dp_warn(dp, "CTRL TX on skb without headroom\n"); + goto err_free; + } + meta_len = 8; + put_unaligned_be32(NFP_META_PORT_ID_CTRL, skb_push(skb, 4)); + put_unaligned_be32(NFP_NET_META_PORTID, skb_push(skb, 4)); + } + + /* Start with the head skbuf */ + dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb), + DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_dma_warn; + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + + /* Stash the soft descriptor of the head then initialize it */ + txbuf = &tx_ring->txbufs[wr_idx]; + txbuf->skb = skb; + txbuf->dma_addr = dma_addr; + txbuf->fidx = -1; + txbuf->pkt_cnt = 1; + txbuf->real_len = real_len; + + /* Build TX descriptor */ + txd = &tx_ring->txds[wr_idx]; + txd->offset_eop = meta_len | NFD3_DESC_TX_EOP; + txd->dma_len = cpu_to_le16(skb_headlen(skb)); + nfp_desc_set_dma_addr(txd, dma_addr); + txd->data_len = cpu_to_le16(skb->len); + + txd->flags = 0; + txd->mss = 0; + txd->lso_hdrlen = 0; + + tx_ring->wr_p++; + tx_ring->wr_ptr_add++; + nfp_net_tx_xmit_more_flush(tx_ring); + + return false; + +err_dma_warn: + nn_dp_warn(dp, "Failed to DMA map TX CTRL buffer\n"); +err_free: + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_errors++; + u64_stats_update_end(&r_vec->tx_sync); + dev_kfree_skb_any(skb); + return false; +} + +bool __nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) +{ + struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; + + return nfp_ctrl_tx_one(nn, r_vec, skb, false); +} + +bool nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) +{ + struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; + bool ret; + + spin_lock_bh(&r_vec->lock); + ret = nfp_ctrl_tx_one(nn, r_vec, skb, false); + spin_unlock_bh(&r_vec->lock); + + return ret; +} + +static void __nfp_ctrl_tx_queued(struct nfp_net_r_vector *r_vec) +{ + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&r_vec->queue))) + if (nfp_ctrl_tx_one(r_vec->nfp_net, r_vec, skb, true)) + return; +} + +static bool +nfp_ctrl_meta_ok(struct nfp_net *nn, void *data, unsigned int meta_len) +{ + u32 meta_type, meta_tag; + + if (!nfp_app_ctrl_has_meta(nn->app)) + return !meta_len; + + if (meta_len != 8) + return false; + + meta_type = get_unaligned_be32(data); + meta_tag = get_unaligned_be32(data + 4); + + return (meta_type == NFP_NET_META_PORTID && + meta_tag == NFP_META_PORT_ID_CTRL); +} + +static bool +nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp, + struct nfp_net_r_vector *r_vec, struct nfp_net_rx_ring *rx_ring) +{ + unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; + struct nfp_net_rx_buf *rxbuf; + struct nfp_net_rx_desc *rxd; + dma_addr_t new_dma_addr; + struct sk_buff *skb; + void *new_frag; + int idx; + + idx = D_IDX(rx_ring, rx_ring->rd_p); + + rxd = &rx_ring->rxds[idx]; + if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) + return false; + + /* Memory barrier to ensure that we won't do other reads + * before the DD bit. 
+ */ + dma_rmb(); + + rx_ring->rd_p++; + + rxbuf = &rx_ring->rxbufs[idx]; + meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; + data_len = le16_to_cpu(rxd->rxd.data_len); + pkt_len = data_len - meta_len; + + pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; + if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) + pkt_off += meta_len; + else + pkt_off += dp->rx_offset; + meta_off = pkt_off - meta_len; + + /* Stats update */ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_pkts++; + r_vec->rx_bytes += pkt_len; + u64_stats_update_end(&r_vec->rx_sync); + + nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, data_len); + + if (unlikely(!nfp_ctrl_meta_ok(nn, rxbuf->frag + meta_off, meta_len))) { + nn_dp_warn(dp, "incorrect metadata for ctrl packet (%d)\n", + meta_len); + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + return true; + } + + skb = build_skb(rxbuf->frag, dp->fl_bufsz); + if (unlikely(!skb)) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + return true; + } + new_frag = nfp_nfd3_napi_alloc_one(dp, &new_dma_addr); + if (unlikely(!new_frag)) { + nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); + return true; + } + + nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); + + nfp_nfd3_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); + + skb_reserve(skb, pkt_off); + skb_put(skb, pkt_len); + + nfp_app_ctrl_rx(nn->app, skb); + + return true; +} + +static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec) +{ + struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring; + struct nfp_net *nn = r_vec->nfp_net; + struct nfp_net_dp *dp = &nn->dp; + unsigned int budget = 512; + + while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--) + continue; + + return budget; +} + +void nfp_nfd3_ctrl_poll(struct tasklet_struct *t) +{ + struct nfp_net_r_vector *r_vec = from_tasklet(r_vec, t, tasklet); + + spin_lock(&r_vec->lock); + nfp_nfd3_tx_complete(r_vec->tx_ring, 0); + __nfp_ctrl_tx_queued(r_vec); + spin_unlock(&r_vec->lock); + + if (nfp_ctrl_rx(r_vec)) { + nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); + } else { + tasklet_schedule(&r_vec->tasklet); + nn_dp_warn(&r_vec->nfp_net->dp, + "control message budget exceeded!\n"); + } +} diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h new file mode 100644 index 000000000000..0bd597ad6c6e --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h @@ -0,0 +1,126 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2015-2019 Netronome Systems, Inc. */ + +#ifndef _NFP_DP_NFD3_H_ +#define _NFP_DP_NFD3_H_ + +struct sk_buff; +struct net_device; + +/* TX descriptor format */ + +#define NFD3_DESC_TX_EOP BIT(7) +#define NFD3_DESC_TX_OFFSET_MASK GENMASK(6, 0) +#define NFD3_DESC_TX_MSS_MASK GENMASK(13, 0) + +/* Flags in the host TX descriptor */ +#define NFD3_DESC_TX_CSUM BIT(7) +#define NFD3_DESC_TX_IP4_CSUM BIT(6) +#define NFD3_DESC_TX_TCP_CSUM BIT(5) +#define NFD3_DESC_TX_UDP_CSUM BIT(4) +#define NFD3_DESC_TX_VLAN BIT(3) +#define NFD3_DESC_TX_LSO BIT(2) +#define NFD3_DESC_TX_ENCAP BIT(1) +#define NFD3_DESC_TX_O_IP4_CSUM BIT(0) + +struct nfp_nfd3_tx_desc { + union { + struct { + u8 dma_addr_hi; /* High bits of host buf address */ + __le16 dma_len; /* Length to DMA for this desc */ + u8 offset_eop; /* Offset in buf where pkt starts + + * highest bit is eop flag. 
+ */ + __le32 dma_addr_lo; /* Low 32bit of host buf addr */ + + __le16 mss; /* MSS to be used for LSO */ + u8 lso_hdrlen; /* LSO, TCP payload offset */ + u8 flags; /* TX Flags, see @NFD3_DESC_TX_* */ + union { + struct { + u8 l3_offset; /* L3 header offset */ + u8 l4_offset; /* L4 header offset */ + }; + __le16 vlan; /* VLAN tag to add if indicated */ + }; + __le16 data_len; /* Length of frame + meta data */ + } __packed; + __le32 vals[4]; + __le64 vals8[2]; + }; +}; + +/** + * struct nfp_nfd3_tx_buf - software TX buffer descriptor + * @skb: normal ring, sk_buff associated with this buffer + * @frag: XDP ring, page frag associated with this buffer + * @xdp: XSK buffer pool handle (for AF_XDP) + * @dma_addr: DMA mapping address of the buffer + * @fidx: Fragment index (-1 for the head and [0..nr_frags-1] for frags) + * @pkt_cnt: Number of packets to be produced out of the skb associated + * with this buffer (valid only on the head's buffer). + * Will be 1 for all non-TSO packets. + * @is_xsk_tx: Flag if buffer is a RX buffer after a XDP_TX action and not a + * buffer from the TX queue (for AF_XDP). + * @real_len: Number of bytes which to be produced out of the skb (valid only + * on the head's buffer). Equal to skb->len for non-TSO packets. + */ +struct nfp_nfd3_tx_buf { + union { + struct sk_buff *skb; + void *frag; + struct xdp_buff *xdp; + }; + dma_addr_t dma_addr; + union { + struct { + short int fidx; + u16 pkt_cnt; + }; + struct { + bool is_xsk_tx; + }; + }; + u32 real_len; +}; + +void +nfp_nfd3_rx_csum(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + const struct nfp_net_rx_desc *rxd, + const struct nfp_meta_parsed *meta, struct sk_buff *skb); +bool +nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, + void *data, void *pkt, unsigned int pkt_len, int meta_len); +void nfp_nfd3_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget); +int nfp_nfd3_poll(struct napi_struct *napi, int budget); +void nfp_nfd3_ctrl_poll(struct tasklet_struct *t); +void nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring); +void nfp_nfd3_xsk_tx_free(struct nfp_nfd3_tx_buf *txbuf); +int nfp_nfd3_xsk_poll(struct napi_struct *napi, int budget); + +void +nfp_nfd3_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); +void +nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring); +int +nfp_nfd3_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); +void +nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring); +int +nfp_nfd3_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); +void +nfp_nfd3_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); +void +nfp_nfd3_print_tx_descs(struct seq_file *file, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p); +netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev); +bool nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); +bool __nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); + +#endif diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c new file mode 100644 index 000000000000..4c6aaebb7522 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c @@ -0,0 +1,243 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2015-2019 Netronome Systems, Inc. 
*/ + +#include + +#include "../nfp_net.h" +#include "../nfp_net_dp.h" +#include "../nfp_net_xsk.h" +#include "nfd3.h" + +static void nfp_nfd3_xsk_tx_bufs_free(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_nfd3_tx_buf *txbuf; + unsigned int idx; + + while (tx_ring->rd_p != tx_ring->wr_p) { + idx = D_IDX(tx_ring, tx_ring->rd_p); + txbuf = &tx_ring->txbufs[idx]; + + txbuf->real_len = 0; + + tx_ring->qcp_rd_p++; + tx_ring->rd_p++; + + if (tx_ring->r_vec->xsk_pool) { + if (txbuf->is_xsk_tx) + nfp_nfd3_xsk_tx_free(txbuf); + + xsk_tx_completed(tx_ring->r_vec->xsk_pool, 1); + } + } +} + +/** + * nfp_nfd3_tx_ring_reset() - Free any untransmitted buffers and reset pointers + * @dp: NFP Net data path struct + * @tx_ring: TX ring structure + * + * Assumes that the device is stopped, must be idempotent. + */ +void +nfp_nfd3_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + struct netdev_queue *nd_q; + const skb_frag_t *frag; + + while (!tx_ring->is_xdp && tx_ring->rd_p != tx_ring->wr_p) { + struct nfp_nfd3_tx_buf *tx_buf; + struct sk_buff *skb; + int idx, nr_frags; + + idx = D_IDX(tx_ring, tx_ring->rd_p); + tx_buf = &tx_ring->txbufs[idx]; + + skb = tx_ring->txbufs[idx].skb; + nr_frags = skb_shinfo(skb)->nr_frags; + + if (tx_buf->fidx == -1) { + /* unmap head */ + dma_unmap_single(dp->dev, tx_buf->dma_addr, + skb_headlen(skb), DMA_TO_DEVICE); + } else { + /* unmap fragment */ + frag = &skb_shinfo(skb)->frags[tx_buf->fidx]; + dma_unmap_page(dp->dev, tx_buf->dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + } + + /* check for last gather fragment */ + if (tx_buf->fidx == nr_frags - 1) + dev_kfree_skb_any(skb); + + tx_buf->dma_addr = 0; + tx_buf->skb = NULL; + tx_buf->fidx = -2; + + tx_ring->qcp_rd_p++; + tx_ring->rd_p++; + } + + if (tx_ring->is_xdp) + nfp_nfd3_xsk_tx_bufs_free(tx_ring); + + memset(tx_ring->txds, 0, tx_ring->size); + tx_ring->wr_p = 0; + tx_ring->rd_p = 0; + tx_ring->qcp_rd_p = 0; + tx_ring->wr_ptr_add = 0; + + if (tx_ring->is_xdp || !dp->netdev) + return; + + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + netdev_tx_reset_queue(nd_q); +} + +/** + * nfp_nfd3_tx_ring_free() - Free resources allocated to a TX ring + * @tx_ring: TX ring to free + */ +void nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + + kvfree(tx_ring->txbufs); + + if (tx_ring->txds) + dma_free_coherent(dp->dev, tx_ring->size, + tx_ring->txds, tx_ring->dma); + + tx_ring->cnt = 0; + tx_ring->txbufs = NULL; + tx_ring->txds = NULL; + tx_ring->dma = 0; + tx_ring->size = 0; +} + +/** + * nfp_nfd3_tx_ring_alloc() - Allocate resource for a TX ring + * @dp: NFP Net data path struct + * @tx_ring: TX Ring structure to allocate + * + * Return: 0 on success, negative errno otherwise. 
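+ *
+ * Descriptor memory is allocated coherently with __GFP_NOWARN so the
+ * netdev_warn() below can report the failure (and the requested ring
+ * size) instead of a generic allocation-failure splat.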
+ */ +int +nfp_nfd3_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + + tx_ring->cnt = dp->txd_cnt; + + tx_ring->size = array_size(tx_ring->cnt, sizeof(*tx_ring->txds)); + tx_ring->txds = dma_alloc_coherent(dp->dev, tx_ring->size, + &tx_ring->dma, + GFP_KERNEL | __GFP_NOWARN); + if (!tx_ring->txds) { + netdev_warn(dp->netdev, "failed to allocate TX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", + tx_ring->cnt); + goto err_alloc; + } + + tx_ring->txbufs = kvcalloc(tx_ring->cnt, sizeof(*tx_ring->txbufs), + GFP_KERNEL); + if (!tx_ring->txbufs) + goto err_alloc; + + if (!tx_ring->is_xdp && dp->netdev) + netif_set_xps_queue(dp->netdev, &r_vec->affinity_mask, + tx_ring->idx); + + return 0; + +err_alloc: + nfp_nfd3_tx_ring_free(tx_ring); + return -ENOMEM; +} + +void +nfp_nfd3_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + unsigned int i; + + if (!tx_ring->is_xdp) + return; + + for (i = 0; i < tx_ring->cnt; i++) { + if (!tx_ring->txbufs[i].frag) + return; + + nfp_net_dma_unmap_rx(dp, tx_ring->txbufs[i].dma_addr); + __free_page(virt_to_page(tx_ring->txbufs[i].frag)); + } +} + +int +nfp_nfd3_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_nfd3_tx_buf *txbufs = tx_ring->txbufs; + unsigned int i; + + if (!tx_ring->is_xdp) + return 0; + + for (i = 0; i < tx_ring->cnt; i++) { + txbufs[i].frag = nfp_net_rx_alloc_one(dp, &txbufs[i].dma_addr); + if (!txbufs[i].frag) { + nfp_nfd3_tx_ring_bufs_free(dp, tx_ring); + return -ENOMEM; + } + } + + return 0; +} + +void +nfp_nfd3_print_tx_descs(struct seq_file *file, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p) +{ + struct nfp_nfd3_tx_desc *txd; + u32 txd_cnt = tx_ring->cnt; + int i; + + for (i = 0; i < txd_cnt; i++) { + struct xdp_buff *xdp; + struct sk_buff *skb; + + txd = &tx_ring->txds[i]; + seq_printf(file, "%04d: 0x%08x 0x%08x 0x%08x 0x%08x", i, + txd->vals[0], txd->vals[1], + txd->vals[2], txd->vals[3]); + + if (!tx_ring->is_xdp) { + skb = READ_ONCE(tx_ring->txbufs[i].skb); + if (skb) + seq_printf(file, " skb->head=%p skb->data=%p", + skb->head, skb->data); + } else { + xdp = READ_ONCE(tx_ring->txbufs[i].xdp); + if (xdp) + seq_printf(file, " xdp->data=%p", xdp->data); + } + + if (tx_ring->txbufs[i].dma_addr) + seq_printf(file, " dma_addr=%pad", + &tx_ring->txbufs[i].dma_addr); + + if (i == tx_ring->rd_p % txd_cnt) + seq_puts(file, " H_RD"); + if (i == tx_ring->wr_p % txd_cnt) + seq_puts(file, " H_WR"); + if (i == d_rd_p % txd_cnt) + seq_puts(file, " D_RD"); + if (i == d_wr_p % txd_cnt) + seq_puts(file, " D_WR"); + + seq_putc(file, '\n'); + } +} diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c new file mode 100644 index 000000000000..c16c4b42ecfd --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c @@ -0,0 +1,408 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2018 Netronome Systems, Inc */ +/* Copyright (C) 2021 Corigine, Inc */ + +#include +#include + +#include "../nfp_app.h" +#include "../nfp_net.h" +#include "../nfp_net_dp.h" +#include "../nfp_net_xsk.h" +#include "nfd3.h" + +static bool +nfp_nfd3_xsk_tx_xdp(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct nfp_net_rx_ring *rx_ring, + struct nfp_net_tx_ring *tx_ring, + struct nfp_net_xsk_rx_buf *xrxbuf, unsigned int pkt_len, + int 
pkt_off) +{ + struct xsk_buff_pool *pool = r_vec->xsk_pool; + struct nfp_nfd3_tx_buf *txbuf; + struct nfp_nfd3_tx_desc *txd; + unsigned int wr_idx; + + if (nfp_net_tx_space(tx_ring) < 1) + return false; + + xsk_buff_raw_dma_sync_for_device(pool, xrxbuf->dma_addr + pkt_off, + pkt_len); + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + + txbuf = &tx_ring->txbufs[wr_idx]; + txbuf->xdp = xrxbuf->xdp; + txbuf->real_len = pkt_len; + txbuf->is_xsk_tx = true; + + /* Build TX descriptor */ + txd = &tx_ring->txds[wr_idx]; + txd->offset_eop = NFD3_DESC_TX_EOP; + txd->dma_len = cpu_to_le16(pkt_len); + nfp_desc_set_dma_addr(txd, xrxbuf->dma_addr + pkt_off); + txd->data_len = cpu_to_le16(pkt_len); + + txd->flags = 0; + txd->mss = 0; + txd->lso_hdrlen = 0; + + tx_ring->wr_ptr_add++; + tx_ring->wr_p++; + + return true; +} + +static void nfp_nfd3_xsk_rx_skb(struct nfp_net_rx_ring *rx_ring, + const struct nfp_net_rx_desc *rxd, + struct nfp_net_xsk_rx_buf *xrxbuf, + const struct nfp_meta_parsed *meta, + unsigned int pkt_len, + bool meta_xdp, + unsigned int *skbs_polled) +{ + struct nfp_net_r_vector *r_vec = rx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + struct net_device *netdev; + struct sk_buff *skb; + + if (likely(!meta->portid)) { + netdev = dp->netdev; + } else { + struct nfp_net *nn = netdev_priv(dp->netdev); + + netdev = nfp_app_dev_get(nn->app, meta->portid, NULL); + if (unlikely(!netdev)) { + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + return; + } + nfp_repr_inc_rx_stats(netdev, pkt_len); + } + + skb = napi_alloc_skb(&r_vec->napi, pkt_len); + if (!skb) { + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + return; + } + memcpy(skb_put(skb, pkt_len), xrxbuf->xdp->data, pkt_len); + + skb->mark = meta->mark; + skb_set_hash(skb, meta->hash, meta->hash_type); + + skb_record_rx_queue(skb, rx_ring->idx); + skb->protocol = eth_type_trans(skb, netdev); + + nfp_nfd3_rx_csum(dp, r_vec, rxd, meta, skb); + + if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + le16_to_cpu(rxd->rxd.vlan)); + if (meta_xdp) + skb_metadata_set(skb, + xrxbuf->xdp->data - xrxbuf->xdp->data_meta); + + napi_gro_receive(&rx_ring->r_vec->napi, skb); + + nfp_net_xsk_rx_free(xrxbuf); + + (*skbs_polled)++; +} + +static unsigned int +nfp_nfd3_xsk_rx(struct nfp_net_rx_ring *rx_ring, int budget, + unsigned int *skbs_polled) +{ + struct nfp_net_r_vector *r_vec = rx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + struct nfp_net_tx_ring *tx_ring; + struct bpf_prog *xdp_prog; + bool xdp_redir = false; + int pkts_polled = 0; + + xdp_prog = READ_ONCE(dp->xdp_prog); + tx_ring = r_vec->xdp_ring; + + while (pkts_polled < budget) { + unsigned int meta_len, data_len, pkt_len, pkt_off; + struct nfp_net_xsk_rx_buf *xrxbuf; + struct nfp_net_rx_desc *rxd; + struct nfp_meta_parsed meta; + int idx, act; + + idx = D_IDX(rx_ring, rx_ring->rd_p); + + rxd = &rx_ring->rxds[idx]; + if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) + break; + + rx_ring->rd_p++; + pkts_polled++; + + xrxbuf = &rx_ring->xsk_rxbufs[idx]; + + /* If starved of buffers "drop" it and scream. */ + if (rx_ring->rd_p >= rx_ring->wr_p) { + nn_dp_warn(dp, "Starved of RX buffers\n"); + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + break; + } + + /* Memory barrier to ensure that we won't do other reads + * before the DD bit. 
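+		 * As in the other RX paths, dma_rmb() keeps the reads of the
+		 * remaining descriptor fields from being reordered before the
+		 * DD check above.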
+ */ + dma_rmb(); + + memset(&meta, 0, sizeof(meta)); + + /* Only supporting AF_XDP with dynamic metadata so buffer layout + * is always: + * + * --------------------------------------------------------- + * | off | metadata | packet | XXXX | + * --------------------------------------------------------- + */ + meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; + data_len = le16_to_cpu(rxd->rxd.data_len); + pkt_len = data_len - meta_len; + + if (unlikely(meta_len > NFP_NET_MAX_PREPEND)) { + nn_dp_warn(dp, "Oversized RX packet metadata %u\n", + meta_len); + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + continue; + } + + /* Stats update. */ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_pkts++; + r_vec->rx_bytes += pkt_len; + u64_stats_update_end(&r_vec->rx_sync); + + xrxbuf->xdp->data += meta_len; + xrxbuf->xdp->data_end = xrxbuf->xdp->data + pkt_len; + xdp_set_data_meta_invalid(xrxbuf->xdp); + xsk_buff_dma_sync_for_cpu(xrxbuf->xdp, r_vec->xsk_pool); + net_prefetch(xrxbuf->xdp->data); + + if (meta_len) { + if (unlikely(nfp_nfd3_parse_meta(dp->netdev, &meta, + xrxbuf->xdp->data - + meta_len, + xrxbuf->xdp->data, + pkt_len, meta_len))) { + nn_dp_warn(dp, "Invalid RX packet metadata\n"); + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + continue; + } + + if (unlikely(meta.portid)) { + struct nfp_net *nn = netdev_priv(dp->netdev); + + if (meta.portid != NFP_META_PORT_ID_CTRL) { + nfp_nfd3_xsk_rx_skb(rx_ring, rxd, + xrxbuf, &meta, + pkt_len, false, + skbs_polled); + continue; + } + + nfp_app_ctrl_rx_raw(nn->app, xrxbuf->xdp->data, + pkt_len); + nfp_net_xsk_rx_free(xrxbuf); + continue; + } + } + + act = bpf_prog_run_xdp(xdp_prog, xrxbuf->xdp); + + pkt_len = xrxbuf->xdp->data_end - xrxbuf->xdp->data; + pkt_off = xrxbuf->xdp->data - xrxbuf->xdp->data_hard_start; + + switch (act) { + case XDP_PASS: + nfp_nfd3_xsk_rx_skb(rx_ring, rxd, xrxbuf, &meta, pkt_len, + true, skbs_polled); + break; + case XDP_TX: + if (!nfp_nfd3_xsk_tx_xdp(dp, r_vec, rx_ring, tx_ring, + xrxbuf, pkt_len, pkt_off)) + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + else + nfp_net_xsk_rx_unstash(xrxbuf); + break; + case XDP_REDIRECT: + if (xdp_do_redirect(dp->netdev, xrxbuf->xdp, xdp_prog)) { + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + } else { + nfp_net_xsk_rx_unstash(xrxbuf); + xdp_redir = true; + } + break; + default: + bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_DROP: + nfp_net_xsk_rx_drop(r_vec, xrxbuf); + break; + } + } + + nfp_net_xsk_rx_ring_fill_freelist(r_vec->rx_ring); + + if (xdp_redir) + xdp_do_flush_map(); + + if (tx_ring->wr_ptr_add) + nfp_net_tx_xmit_more_flush(tx_ring); + + return pkts_polled; +} + +void nfp_nfd3_xsk_tx_free(struct nfp_nfd3_tx_buf *txbuf) +{ + xsk_buff_free(txbuf->xdp); + + txbuf->dma_addr = 0; + txbuf->xdp = NULL; +} + +static bool nfp_nfd3_xsk_complete(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + u32 done_pkts = 0, done_bytes = 0, reused = 0; + bool done_all; + int idx, todo; + u32 qcp_rd_p; + + if (tx_ring->wr_p == tx_ring->rd_p) + return true; + + /* Work out how many descriptors have been transmitted. 
*/ + qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); + + if (qcp_rd_p == tx_ring->qcp_rd_p) + return true; + + todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); + + done_all = todo <= NFP_NET_XDP_MAX_COMPLETE; + todo = min(todo, NFP_NET_XDP_MAX_COMPLETE); + + tx_ring->qcp_rd_p = D_IDX(tx_ring, tx_ring->qcp_rd_p + todo); + + done_pkts = todo; + while (todo--) { + struct nfp_nfd3_tx_buf *txbuf; + + idx = D_IDX(tx_ring, tx_ring->rd_p); + tx_ring->rd_p++; + + txbuf = &tx_ring->txbufs[idx]; + if (unlikely(!txbuf->real_len)) + continue; + + done_bytes += txbuf->real_len; + txbuf->real_len = 0; + + if (txbuf->is_xsk_tx) { + nfp_nfd3_xsk_tx_free(txbuf); + reused++; + } + } + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_bytes += done_bytes; + r_vec->tx_pkts += done_pkts; + u64_stats_update_end(&r_vec->tx_sync); + + xsk_tx_completed(r_vec->xsk_pool, done_pkts - reused); + + WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, + "XDP TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", + tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); + + return done_all; +} + +static void nfp_nfd3_xsk_tx(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct xdp_desc desc[NFP_NET_XSK_TX_BATCH]; + struct xsk_buff_pool *xsk_pool; + struct nfp_nfd3_tx_desc *txd; + u32 pkts = 0, wr_idx; + u32 i, got; + + xsk_pool = r_vec->xsk_pool; + + while (nfp_net_tx_space(tx_ring) >= NFP_NET_XSK_TX_BATCH) { + for (i = 0; i < NFP_NET_XSK_TX_BATCH; i++) + if (!xsk_tx_peek_desc(xsk_pool, &desc[i])) + break; + got = i; + if (!got) + break; + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p + i); + prefetchw(&tx_ring->txds[wr_idx]); + + for (i = 0; i < got; i++) + xsk_buff_raw_dma_sync_for_device(xsk_pool, desc[i].addr, + desc[i].len); + + for (i = 0; i < got; i++) { + wr_idx = D_IDX(tx_ring, tx_ring->wr_p + i); + + tx_ring->txbufs[wr_idx].real_len = desc[i].len; + tx_ring->txbufs[wr_idx].is_xsk_tx = false; + + /* Build TX descriptor. */ + txd = &tx_ring->txds[wr_idx]; + nfp_desc_set_dma_addr(txd, + xsk_buff_raw_get_dma(xsk_pool, + desc[i].addr + )); + txd->offset_eop = NFD3_DESC_TX_EOP; + txd->dma_len = cpu_to_le16(desc[i].len); + txd->data_len = cpu_to_le16(desc[i].len); + } + + tx_ring->wr_p += got; + pkts += got; + } + + if (!pkts) + return; + + xsk_tx_release(xsk_pool); + /* Ensure all records are visible before incrementing write counter. 
*/ + wmb(); + nfp_qcp_wr_ptr_add(tx_ring->qcp_q, pkts); +} + +int nfp_nfd3_xsk_poll(struct napi_struct *napi, int budget) +{ + struct nfp_net_r_vector *r_vec = + container_of(napi, struct nfp_net_r_vector, napi); + unsigned int pkts_polled, skbs = 0; + + pkts_polled = nfp_nfd3_xsk_rx(r_vec->rx_ring, budget, &skbs); + + if (pkts_polled < budget) { + if (r_vec->tx_ring) + nfp_nfd3_tx_complete(r_vec->tx_ring, budget); + + if (!nfp_nfd3_xsk_complete(r_vec->xdp_ring)) + pkts_polled = budget; + + nfp_nfd3_xsk_tx(r_vec->xdp_ring); + + if (pkts_polled < budget && napi_complete_done(napi, skbs)) + nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); + } + + return pkts_polled; +} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 49b5fcb49aef..bffe53f4c2d3 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -104,6 +104,9 @@ struct nfp_net_r_vector; struct nfp_port; struct xsk_buff_pool; +struct nfp_nfd3_tx_desc; +struct nfp_nfd3_tx_buf; + /* Convenience macro for wrapping descriptor index on ring size */ #define D_IDX(ring, idx) ((idx) & ((ring)->cnt - 1)) @@ -117,83 +120,6 @@ struct xsk_buff_pool; __d->dma_addr_hi = upper_32_bits(__addr) & 0xff; \ } while (0) -/* TX descriptor format */ - -#define PCIE_DESC_TX_EOP BIT(7) -#define PCIE_DESC_TX_OFFSET_MASK GENMASK(6, 0) -#define PCIE_DESC_TX_MSS_MASK GENMASK(13, 0) - -/* Flags in the host TX descriptor */ -#define PCIE_DESC_TX_CSUM BIT(7) -#define PCIE_DESC_TX_IP4_CSUM BIT(6) -#define PCIE_DESC_TX_TCP_CSUM BIT(5) -#define PCIE_DESC_TX_UDP_CSUM BIT(4) -#define PCIE_DESC_TX_VLAN BIT(3) -#define PCIE_DESC_TX_LSO BIT(2) -#define PCIE_DESC_TX_ENCAP BIT(1) -#define PCIE_DESC_TX_O_IP4_CSUM BIT(0) - -struct nfp_net_tx_desc { - union { - struct { - u8 dma_addr_hi; /* High bits of host buf address */ - __le16 dma_len; /* Length to DMA for this desc */ - u8 offset_eop; /* Offset in buf where pkt starts + - * highest bit is eop flag. - */ - __le32 dma_addr_lo; /* Low 32bit of host buf addr */ - - __le16 mss; /* MSS to be used for LSO */ - u8 lso_hdrlen; /* LSO, TCP payload offset */ - u8 flags; /* TX Flags, see @PCIE_DESC_TX_* */ - union { - struct { - u8 l3_offset; /* L3 header offset */ - u8 l4_offset; /* L4 header offset */ - }; - __le16 vlan; /* VLAN tag to add if indicated */ - }; - __le16 data_len; /* Length of frame + meta data */ - } __packed; - __le32 vals[4]; - __le64 vals8[2]; - }; -}; - -/** - * struct nfp_net_tx_buf - software TX buffer descriptor - * @skb: normal ring, sk_buff associated with this buffer - * @frag: XDP ring, page frag associated with this buffer - * @xdp: XSK buffer pool handle (for AF_XDP) - * @dma_addr: DMA mapping address of the buffer - * @fidx: Fragment index (-1 for the head and [0..nr_frags-1] for frags) - * @pkt_cnt: Number of packets to be produced out of the skb associated - * with this buffer (valid only on the head's buffer). - * Will be 1 for all non-TSO packets. - * @is_xsk_tx: Flag if buffer is a RX buffer after a XDP_TX action and not a - * buffer from the TX queue (for AF_XDP). - * @real_len: Number of bytes which to be produced out of the skb (valid only - * on the head's buffer). Equal to skb->len for non-TSO packets. 
- */ -struct nfp_net_tx_buf { - union { - struct sk_buff *skb; - void *frag; - struct xdp_buff *xdp; - }; - dma_addr_t dma_addr; - union { - struct { - short int fidx; - u16 pkt_cnt; - }; - struct { - bool is_xsk_tx; - }; - }; - u32 real_len; -}; - /** * struct nfp_net_tx_ring - TX ring structure * @r_vec: Back pointer to ring vector structure @@ -226,8 +152,8 @@ struct nfp_net_tx_ring { u32 wr_ptr_add; - struct nfp_net_tx_buf *txbufs; - struct nfp_net_tx_desc *txds; + struct nfp_nfd3_tx_buf *txbufs; + struct nfp_nfd3_tx_desc *txds; dma_addr_t dma; size_t size; @@ -960,7 +886,6 @@ int nfp_net_mbox_reconfig_and_unlock(struct nfp_net *nn, u32 mbox_cmd); void nfp_net_mbox_reconfig_post(struct nfp_net *nn, u32 update); int nfp_net_mbox_reconfig_wait_posted(struct nfp_net *nn); -void nfp_net_irq_unmask(struct nfp_net *nn, unsigned int entry_nr); unsigned int nfp_net_irqs_alloc(struct pci_dev *pdev, struct msix_entry *irq_entries, unsigned int min_irqs, unsigned int want_irqs); @@ -968,19 +893,10 @@ void nfp_net_irqs_disable(struct pci_dev *pdev); void nfp_net_irqs_assign(struct nfp_net *nn, struct msix_entry *irq_entries, unsigned int n); - -void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring); -void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget); - -bool -nfp_net_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, - void *data, void *pkt, unsigned int pkt_len, int meta_len); - -void nfp_net_rx_csum(const struct nfp_net_dp *dp, - struct nfp_net_r_vector *r_vec, - const struct nfp_net_rx_desc *rxd, - const struct nfp_meta_parsed *meta, - struct sk_buff *skb); +struct sk_buff * +nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, u64 *tls_handle, int *nr_frags); +void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle); struct nfp_net_dp *nfp_net_clone_dp(struct nfp_net *nn); int nfp_net_ring_reconfig(struct nfp_net *nn, struct nfp_net_dp *new, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 50f7ada0dedd..1d3277068301 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1,5 +1,5 @@ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) -/* Copyright (C) 2015-2018 Netronome Systems, Inc. */ +/* Copyright (C) 2015-2019 Netronome Systems, Inc. 
*/ /* * nfp_net_common.c @@ -13,7 +13,6 @@ #include #include -#include #include #include #include @@ -46,6 +45,7 @@ #include "nfp_app.h" #include "nfp_net_ctrl.h" #include "nfp_net.h" +#include "nfp_net_dp.h" #include "nfp_net_sriov.h" #include "nfp_net_xsk.h" #include "nfp_port.h" @@ -72,35 +72,6 @@ u32 nfp_qcp_queue_offset(const struct nfp_dev_info *dev_info, u16 queue) return dev_info->qc_addr_offset + NFP_QCP_QUEUE_ADDR_SZ * queue; } -static dma_addr_t nfp_net_dma_map_rx(struct nfp_net_dp *dp, void *frag) -{ - return dma_map_single_attrs(dp->dev, frag + NFP_NET_RX_BUF_HEADROOM, - dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, - dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC); -} - -static void -nfp_net_dma_sync_dev_rx(const struct nfp_net_dp *dp, dma_addr_t dma_addr) -{ - dma_sync_single_for_device(dp->dev, dma_addr, - dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, - dp->rx_dma_dir); -} - -static void nfp_net_dma_unmap_rx(struct nfp_net_dp *dp, dma_addr_t dma_addr) -{ - dma_unmap_single_attrs(dp->dev, dma_addr, - dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, - dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC); -} - -static void nfp_net_dma_sync_cpu_rx(struct nfp_net_dp *dp, dma_addr_t dma_addr, - unsigned int len) -{ - dma_sync_single_for_cpu(dp->dev, dma_addr - NFP_NET_RX_BUF_HEADROOM, - len, dp->rx_dma_dir); -} - /* Firmware reconfig * * Firmware reconfig may take a while so we have two versions of it - @@ -383,19 +354,6 @@ int nfp_net_mbox_reconfig_and_unlock(struct nfp_net *nn, u32 mbox_cmd) /* Interrupt configuration and handling */ -/** - * nfp_net_irq_unmask() - Unmask automasked interrupt - * @nn: NFP Network structure - * @entry_nr: MSI-X table entry - * - * Clear the ICR for the IRQ entry. - */ -void nfp_net_irq_unmask(struct nfp_net *nn, unsigned int entry_nr) -{ - nn_writeb(nn, NFP_NET_CFG_ICR(entry_nr), NFP_NET_CFG_ICR_UNMASKED); - nn_pci_flush(nn); -} - /** * nfp_net_irqs_alloc() - allocates MSI-X irqs * @pdev: PCI device structure @@ -577,49 +535,6 @@ static irqreturn_t nfp_net_irq_exn(int irq, void *data) return IRQ_HANDLED; } -/** - * nfp_net_tx_ring_init() - Fill in the boilerplate for a TX ring - * @tx_ring: TX ring structure - * @r_vec: IRQ vector servicing this ring - * @idx: Ring index - * @is_xdp: Is this an XDP TX ring? 
- */ -static void -nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring, - struct nfp_net_r_vector *r_vec, unsigned int idx, - bool is_xdp) -{ - struct nfp_net *nn = r_vec->nfp_net; - - tx_ring->idx = idx; - tx_ring->r_vec = r_vec; - tx_ring->is_xdp = is_xdp; - u64_stats_init(&tx_ring->r_vec->tx_sync); - - tx_ring->qcidx = tx_ring->idx * nn->stride_tx; - tx_ring->qcp_q = nn->tx_bar + NFP_QCP_QUEUE_OFF(tx_ring->qcidx); -} - -/** - * nfp_net_rx_ring_init() - Fill in the boilerplate for a RX ring - * @rx_ring: RX ring structure - * @r_vec: IRQ vector servicing this ring - * @idx: Ring index - */ -static void -nfp_net_rx_ring_init(struct nfp_net_rx_ring *rx_ring, - struct nfp_net_r_vector *r_vec, unsigned int idx) -{ - struct nfp_net *nn = r_vec->nfp_net; - - rx_ring->idx = idx; - rx_ring->r_vec = r_vec; - u64_stats_init(&rx_ring->r_vec->rx_sync); - - rx_ring->fl_qcidx = rx_ring->idx * nn->stride_rx; - rx_ring->qcp_fl = nn->rx_bar + NFP_QCP_QUEUE_OFF(rx_ring->fl_qcidx); -} - /** * nfp_net_aux_irq_request() - Request an auxiliary interrupt (LSC or EXN) * @nn: NFP Network structure @@ -667,178 +582,7 @@ static void nfp_net_aux_irq_free(struct nfp_net *nn, u32 ctrl_offset, free_irq(nn->irq_entries[vector_idx].vector, nn); } -/* Transmit - * - * One queue controller peripheral queue is used for transmit. The - * driver en-queues packets for transmit by advancing the write - * pointer. The device indicates that packets have transmitted by - * advancing the read pointer. The driver maintains a local copy of - * the read and write pointer in @struct nfp_net_tx_ring. The driver - * keeps @wr_p in sync with the queue controller write pointer and can - * determine how many packets have been transmitted by comparing its - * copy of the read pointer @rd_p with the read pointer maintained by - * the queue controller peripheral. - */ - -/** - * nfp_net_tx_full() - Check if the TX ring is full - * @tx_ring: TX ring to check - * @dcnt: Number of descriptors that need to be enqueued (must be >= 1) - * - * This function checks, based on the *host copy* of read/write - * pointer if a given TX ring is full. The real TX queue may have - * some newly made available slots. - * - * Return: True if the ring is full. - */ -static int nfp_net_tx_full(struct nfp_net_tx_ring *tx_ring, int dcnt) -{ - return (tx_ring->wr_p - tx_ring->rd_p) >= (tx_ring->cnt - dcnt); -} - -/* Wrappers for deciding when to stop and restart TX queues */ -static int nfp_net_tx_ring_should_wake(struct nfp_net_tx_ring *tx_ring) -{ - return !nfp_net_tx_full(tx_ring, MAX_SKB_FRAGS * 4); -} - -static int nfp_net_tx_ring_should_stop(struct nfp_net_tx_ring *tx_ring) -{ - return nfp_net_tx_full(tx_ring, MAX_SKB_FRAGS + 1); -} - -/** - * nfp_net_tx_ring_stop() - stop tx ring - * @nd_q: netdev queue - * @tx_ring: driver tx queue structure - * - * Safely stop TX ring. Remember that while we are running .start_xmit() - * someone else may be cleaning the TX ring completions so we need to be - * extra careful here. 
- */ -static void nfp_net_tx_ring_stop(struct netdev_queue *nd_q, - struct nfp_net_tx_ring *tx_ring) -{ - netif_tx_stop_queue(nd_q); - - /* We can race with the TX completion out of NAPI so recheck */ - smp_mb(); - if (unlikely(nfp_net_tx_ring_should_wake(tx_ring))) - netif_tx_start_queue(nd_q); -} - -/** - * nfp_net_tx_tso() - Set up Tx descriptor for LSO - * @r_vec: per-ring structure - * @txbuf: Pointer to driver soft TX descriptor - * @txd: Pointer to HW TX descriptor - * @skb: Pointer to SKB - * @md_bytes: Prepend length - * - * Set up Tx descriptor for LSO, do nothing for non-LSO skbs. - * Return error on packet header greater than maximum supported LSO header size. - */ -static void nfp_net_tx_tso(struct nfp_net_r_vector *r_vec, - struct nfp_net_tx_buf *txbuf, - struct nfp_net_tx_desc *txd, struct sk_buff *skb, - u32 md_bytes) -{ - u32 l3_offset, l4_offset, hdrlen; - u16 mss; - - if (!skb_is_gso(skb)) - return; - - if (!skb->encapsulation) { - l3_offset = skb_network_offset(skb); - l4_offset = skb_transport_offset(skb); - hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb); - } else { - l3_offset = skb_inner_network_offset(skb); - l4_offset = skb_inner_transport_offset(skb); - hdrlen = skb_inner_transport_header(skb) - skb->data + - inner_tcp_hdrlen(skb); - } - - txbuf->pkt_cnt = skb_shinfo(skb)->gso_segs; - txbuf->real_len += hdrlen * (txbuf->pkt_cnt - 1); - - mss = skb_shinfo(skb)->gso_size & PCIE_DESC_TX_MSS_MASK; - txd->l3_offset = l3_offset - md_bytes; - txd->l4_offset = l4_offset - md_bytes; - txd->lso_hdrlen = hdrlen - md_bytes; - txd->mss = cpu_to_le16(mss); - txd->flags |= PCIE_DESC_TX_LSO; - - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_lso++; - u64_stats_update_end(&r_vec->tx_sync); -} - -/** - * nfp_net_tx_csum() - Set TX CSUM offload flags in TX descriptor - * @dp: NFP Net data path struct - * @r_vec: per-ring structure - * @txbuf: Pointer to driver soft TX descriptor - * @txd: Pointer to TX descriptor - * @skb: Pointer to SKB - * - * This function sets the TX checksum flags in the TX descriptor based - * on the configuration and the protocol of the packet to be transmitted. - */ -static void nfp_net_tx_csum(struct nfp_net_dp *dp, - struct nfp_net_r_vector *r_vec, - struct nfp_net_tx_buf *txbuf, - struct nfp_net_tx_desc *txd, struct sk_buff *skb) -{ - struct ipv6hdr *ipv6h; - struct iphdr *iph; - u8 l4_hdr; - - if (!(dp->ctrl & NFP_NET_CFG_CTRL_TXCSUM)) - return; - - if (skb->ip_summed != CHECKSUM_PARTIAL) - return; - - txd->flags |= PCIE_DESC_TX_CSUM; - if (skb->encapsulation) - txd->flags |= PCIE_DESC_TX_ENCAP; - - iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); - ipv6h = skb->encapsulation ? 
inner_ipv6_hdr(skb) : ipv6_hdr(skb); - - if (iph->version == 4) { - txd->flags |= PCIE_DESC_TX_IP4_CSUM; - l4_hdr = iph->protocol; - } else if (ipv6h->version == 6) { - l4_hdr = ipv6h->nexthdr; - } else { - nn_dp_warn(dp, "partial checksum but ipv=%x!\n", iph->version); - return; - } - - switch (l4_hdr) { - case IPPROTO_TCP: - txd->flags |= PCIE_DESC_TX_TCP_CSUM; - break; - case IPPROTO_UDP: - txd->flags |= PCIE_DESC_TX_UDP_CSUM; - break; - default: - nn_dp_warn(dp, "partial checksum but l4 proto=%x!\n", l4_hdr); - return; - } - - u64_stats_update_begin(&r_vec->tx_sync); - if (skb->encapsulation) - r_vec->hw_csum_tx_inner += txbuf->pkt_cnt; - else - r_vec->hw_csum_tx += txbuf->pkt_cnt; - u64_stats_update_end(&r_vec->tx_sync); -} - -static struct sk_buff * +struct sk_buff * nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, struct sk_buff *skb, u64 *tls_handle, int *nr_frags) { @@ -910,7 +654,7 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, return skb; } -static void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle) +void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle) { #ifdef CONFIG_TLS_DEVICE struct nfp_net_tls_offload_ctx *ntls; @@ -932,414 +676,6 @@ static void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle) #endif } -void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring) -{ - wmb(); - nfp_qcp_wr_ptr_add(tx_ring->qcp_q, tx_ring->wr_ptr_add); - tx_ring->wr_ptr_add = 0; -} - -static int nfp_net_prep_tx_meta(struct sk_buff *skb, u64 tls_handle) -{ - struct metadata_dst *md_dst = skb_metadata_dst(skb); - unsigned char *data; - u32 meta_id = 0; - int md_bytes; - - if (likely(!md_dst && !tls_handle)) - return 0; - if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) { - if (!tls_handle) - return 0; - md_dst = NULL; - } - - md_bytes = 4 + !!md_dst * 4 + !!tls_handle * 8; - - if (unlikely(skb_cow_head(skb, md_bytes))) - return -ENOMEM; - - meta_id = 0; - data = skb_push(skb, md_bytes) + md_bytes; - if (md_dst) { - data -= 4; - put_unaligned_be32(md_dst->u.port_info.port_id, data); - meta_id = NFP_NET_META_PORTID; - } - if (tls_handle) { - /* conn handle is opaque, we just use u64 to be able to quickly - * compare it to zero - */ - data -= 8; - memcpy(data, &tls_handle, sizeof(tls_handle)); - meta_id <<= NFP_NET_META_FIELD_SIZE; - meta_id |= NFP_NET_META_CONN_HANDLE; - } - - data -= 4; - put_unaligned_be32(meta_id, data); - - return md_bytes; -} - -/** - * nfp_net_tx() - Main transmit entry point - * @skb: SKB to transmit - * @netdev: netdev structure - * - * Return: NETDEV_TX_OK on success. - */ -static netdev_tx_t nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) -{ - struct nfp_net *nn = netdev_priv(netdev); - const skb_frag_t *frag; - int f, nr_frags, wr_idx, md_bytes; - struct nfp_net_tx_ring *tx_ring; - struct nfp_net_r_vector *r_vec; - struct nfp_net_tx_buf *txbuf; - struct nfp_net_tx_desc *txd; - struct netdev_queue *nd_q; - struct nfp_net_dp *dp; - dma_addr_t dma_addr; - unsigned int fsize; - u64 tls_handle = 0; - u16 qidx; - - dp = &nn->dp; - qidx = skb_get_queue_mapping(skb); - tx_ring = &dp->tx_rings[qidx]; - r_vec = tx_ring->r_vec; - - nr_frags = skb_shinfo(skb)->nr_frags; - - if (unlikely(nfp_net_tx_full(tx_ring, nr_frags + 1))) { - nn_dp_warn(dp, "TX ring %d busy. 
wrp=%u rdp=%u\n", - qidx, tx_ring->wr_p, tx_ring->rd_p); - nd_q = netdev_get_tx_queue(dp->netdev, qidx); - netif_tx_stop_queue(nd_q); - nfp_net_tx_xmit_more_flush(tx_ring); - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_busy++; - u64_stats_update_end(&r_vec->tx_sync); - return NETDEV_TX_BUSY; - } - - skb = nfp_net_tls_tx(dp, r_vec, skb, &tls_handle, &nr_frags); - if (unlikely(!skb)) { - nfp_net_tx_xmit_more_flush(tx_ring); - return NETDEV_TX_OK; - } - - md_bytes = nfp_net_prep_tx_meta(skb, tls_handle); - if (unlikely(md_bytes < 0)) - goto err_flush; - - /* Start with the head skbuf */ - dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb), - DMA_TO_DEVICE); - if (dma_mapping_error(dp->dev, dma_addr)) - goto err_dma_err; - - wr_idx = D_IDX(tx_ring, tx_ring->wr_p); - - /* Stash the soft descriptor of the head then initialize it */ - txbuf = &tx_ring->txbufs[wr_idx]; - txbuf->skb = skb; - txbuf->dma_addr = dma_addr; - txbuf->fidx = -1; - txbuf->pkt_cnt = 1; - txbuf->real_len = skb->len; - - /* Build TX descriptor */ - txd = &tx_ring->txds[wr_idx]; - txd->offset_eop = (nr_frags ? 0 : PCIE_DESC_TX_EOP) | md_bytes; - txd->dma_len = cpu_to_le16(skb_headlen(skb)); - nfp_desc_set_dma_addr(txd, dma_addr); - txd->data_len = cpu_to_le16(skb->len); - - txd->flags = 0; - txd->mss = 0; - txd->lso_hdrlen = 0; - - /* Do not reorder - tso may adjust pkt cnt, vlan may override fields */ - nfp_net_tx_tso(r_vec, txbuf, txd, skb, md_bytes); - nfp_net_tx_csum(dp, r_vec, txbuf, txd, skb); - if (skb_vlan_tag_present(skb) && dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN) { - txd->flags |= PCIE_DESC_TX_VLAN; - txd->vlan = cpu_to_le16(skb_vlan_tag_get(skb)); - } - - /* Gather DMA */ - if (nr_frags > 0) { - __le64 second_half; - - /* all descs must match except for in addr, length and eop */ - second_half = txd->vals8[1]; - - for (f = 0; f < nr_frags; f++) { - frag = &skb_shinfo(skb)->frags[f]; - fsize = skb_frag_size(frag); - - dma_addr = skb_frag_dma_map(dp->dev, frag, 0, - fsize, DMA_TO_DEVICE); - if (dma_mapping_error(dp->dev, dma_addr)) - goto err_unmap; - - wr_idx = D_IDX(tx_ring, wr_idx + 1); - tx_ring->txbufs[wr_idx].skb = skb; - tx_ring->txbufs[wr_idx].dma_addr = dma_addr; - tx_ring->txbufs[wr_idx].fidx = f; - - txd = &tx_ring->txds[wr_idx]; - txd->dma_len = cpu_to_le16(fsize); - nfp_desc_set_dma_addr(txd, dma_addr); - txd->offset_eop = md_bytes | - ((f == nr_frags - 1) ? 
PCIE_DESC_TX_EOP : 0); - txd->vals8[1] = second_half; - } - - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_gather++; - u64_stats_update_end(&r_vec->tx_sync); - } - - skb_tx_timestamp(skb); - - nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); - - tx_ring->wr_p += nr_frags + 1; - if (nfp_net_tx_ring_should_stop(tx_ring)) - nfp_net_tx_ring_stop(nd_q, tx_ring); - - tx_ring->wr_ptr_add += nr_frags + 1; - if (__netdev_tx_sent_queue(nd_q, txbuf->real_len, netdev_xmit_more())) - nfp_net_tx_xmit_more_flush(tx_ring); - - return NETDEV_TX_OK; - -err_unmap: - while (--f >= 0) { - frag = &skb_shinfo(skb)->frags[f]; - dma_unmap_page(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, - skb_frag_size(frag), DMA_TO_DEVICE); - tx_ring->txbufs[wr_idx].skb = NULL; - tx_ring->txbufs[wr_idx].dma_addr = 0; - tx_ring->txbufs[wr_idx].fidx = -2; - wr_idx = wr_idx - 1; - if (wr_idx < 0) - wr_idx += tx_ring->cnt; - } - dma_unmap_single(dp->dev, tx_ring->txbufs[wr_idx].dma_addr, - skb_headlen(skb), DMA_TO_DEVICE); - tx_ring->txbufs[wr_idx].skb = NULL; - tx_ring->txbufs[wr_idx].dma_addr = 0; - tx_ring->txbufs[wr_idx].fidx = -2; -err_dma_err: - nn_dp_warn(dp, "Failed to map DMA TX buffer\n"); -err_flush: - nfp_net_tx_xmit_more_flush(tx_ring); - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_errors++; - u64_stats_update_end(&r_vec->tx_sync); - nfp_net_tls_tx_undo(skb, tls_handle); - dev_kfree_skb_any(skb); - return NETDEV_TX_OK; -} - -/** - * nfp_net_tx_complete() - Handled completed TX packets - * @tx_ring: TX ring structure - * @budget: NAPI budget (only used as bool to determine if in NAPI context) - */ -void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - struct netdev_queue *nd_q; - u32 done_pkts = 0, done_bytes = 0; - u32 qcp_rd_p; - int todo; - - if (tx_ring->wr_p == tx_ring->rd_p) - return; - - /* Work out how many descriptors have been transmitted */ - qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); - - if (qcp_rd_p == tx_ring->qcp_rd_p) - return; - - todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); - - while (todo--) { - const skb_frag_t *frag; - struct nfp_net_tx_buf *tx_buf; - struct sk_buff *skb; - int fidx, nr_frags; - int idx; - - idx = D_IDX(tx_ring, tx_ring->rd_p++); - tx_buf = &tx_ring->txbufs[idx]; - - skb = tx_buf->skb; - if (!skb) - continue; - - nr_frags = skb_shinfo(skb)->nr_frags; - fidx = tx_buf->fidx; - - if (fidx == -1) { - /* unmap head */ - dma_unmap_single(dp->dev, tx_buf->dma_addr, - skb_headlen(skb), DMA_TO_DEVICE); - - done_pkts += tx_buf->pkt_cnt; - done_bytes += tx_buf->real_len; - } else { - /* unmap fragment */ - frag = &skb_shinfo(skb)->frags[fidx]; - dma_unmap_page(dp->dev, tx_buf->dma_addr, - skb_frag_size(frag), DMA_TO_DEVICE); - } - - /* check for last gather fragment */ - if (fidx == nr_frags - 1) - napi_consume_skb(skb, budget); - - tx_buf->dma_addr = 0; - tx_buf->skb = NULL; - tx_buf->fidx = -2; - } - - tx_ring->qcp_rd_p = qcp_rd_p; - - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_bytes += done_bytes; - r_vec->tx_pkts += done_pkts; - u64_stats_update_end(&r_vec->tx_sync); - - if (!dp->netdev) - return; - - nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); - netdev_tx_completed_queue(nd_q, done_pkts, done_bytes); - if (nfp_net_tx_ring_should_wake(tx_ring)) { - /* Make sure TX thread will see updated tx_ring->rd_p */ - smp_mb(); - - if (unlikely(netif_tx_queue_stopped(nd_q))) - netif_tx_wake_queue(nd_q); - } - - 
WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, - "TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", - tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); -} - -static bool nfp_net_xdp_complete(struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - u32 done_pkts = 0, done_bytes = 0; - bool done_all; - int idx, todo; - u32 qcp_rd_p; - - /* Work out how many descriptors have been transmitted */ - qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); - - if (qcp_rd_p == tx_ring->qcp_rd_p) - return true; - - todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); - - done_all = todo <= NFP_NET_XDP_MAX_COMPLETE; - todo = min(todo, NFP_NET_XDP_MAX_COMPLETE); - - tx_ring->qcp_rd_p = D_IDX(tx_ring, tx_ring->qcp_rd_p + todo); - - done_pkts = todo; - while (todo--) { - idx = D_IDX(tx_ring, tx_ring->rd_p); - tx_ring->rd_p++; - - done_bytes += tx_ring->txbufs[idx].real_len; - } - - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_bytes += done_bytes; - r_vec->tx_pkts += done_pkts; - u64_stats_update_end(&r_vec->tx_sync); - - WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, - "XDP TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", - tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); - - return done_all; -} - -/** - * nfp_net_tx_ring_reset() - Free any untransmitted buffers and reset pointers - * @dp: NFP Net data path struct - * @tx_ring: TX ring structure - * - * Assumes that the device is stopped, must be idempotent. - */ -static void -nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) -{ - const skb_frag_t *frag; - struct netdev_queue *nd_q; - - while (!tx_ring->is_xdp && tx_ring->rd_p != tx_ring->wr_p) { - struct nfp_net_tx_buf *tx_buf; - struct sk_buff *skb; - int idx, nr_frags; - - idx = D_IDX(tx_ring, tx_ring->rd_p); - tx_buf = &tx_ring->txbufs[idx]; - - skb = tx_ring->txbufs[idx].skb; - nr_frags = skb_shinfo(skb)->nr_frags; - - if (tx_buf->fidx == -1) { - /* unmap head */ - dma_unmap_single(dp->dev, tx_buf->dma_addr, - skb_headlen(skb), DMA_TO_DEVICE); - } else { - /* unmap fragment */ - frag = &skb_shinfo(skb)->frags[tx_buf->fidx]; - dma_unmap_page(dp->dev, tx_buf->dma_addr, - skb_frag_size(frag), DMA_TO_DEVICE); - } - - /* check for last gather fragment */ - if (tx_buf->fidx == nr_frags - 1) - dev_kfree_skb_any(skb); - - tx_buf->dma_addr = 0; - tx_buf->skb = NULL; - tx_buf->fidx = -2; - - tx_ring->qcp_rd_p++; - tx_ring->rd_p++; - } - - if (tx_ring->is_xdp) - nfp_net_xsk_tx_bufs_free(tx_ring); - - memset(tx_ring->txds, 0, tx_ring->size); - tx_ring->wr_p = 0; - tx_ring->rd_p = 0; - tx_ring->qcp_rd_p = 0; - tx_ring->wr_ptr_add = 0; - - if (tx_ring->is_xdp || !dp->netdev) - return; - - nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); - netdev_tx_reset_queue(nd_q); -} - static void nfp_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) { struct nfp_net *nn = netdev_priv(netdev); @@ -1347,8 +683,7 @@ static void nfp_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) nn_warn(nn, "TX watchdog timeout on ring: %u\n", txqueue); } -/* Receive processing - */ +/* Receive processing */ static unsigned int nfp_net_calc_fl_bufsz_data(struct nfp_net_dp *dp) { @@ -1387,1335 +722,53 @@ static unsigned int nfp_net_calc_fl_bufsz_xsk(struct nfp_net_dp *dp) return fl_bufsz; } -static void -nfp_net_free_frag(void *frag, bool xdp) -{ - if (!xdp) - skb_free_frag(frag); - else - __free_page(virt_to_page(frag)); -} +/* Setup and Configuration + */ /** - * nfp_net_rx_alloc_one() - Allocate and map page frag for RX - * @dp: NFP Net data path 
struct - * @dma_addr: Pointer to storage for DMA address (output param) - * - * This function will allcate a new page frag, map it for DMA. - * - * Return: allocated page frag or NULL on failure. + * nfp_net_vecs_init() - Assign IRQs and setup rvecs. + * @nn: NFP Network structure */ -static void *nfp_net_rx_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) +static void nfp_net_vecs_init(struct nfp_net *nn) { - void *frag; + struct nfp_net_r_vector *r_vec; + int r; - if (!dp->xdp_prog) { - frag = netdev_alloc_frag(dp->fl_bufsz); - } else { - struct page *page; + nn->lsc_handler = nfp_net_irq_lsc; + nn->exn_handler = nfp_net_irq_exn; - page = alloc_page(GFP_KERNEL); - frag = page ? page_address(page) : NULL; - } - if (!frag) { - nn_dp_warn(dp, "Failed to alloc receive page frag\n"); - return NULL; - } + for (r = 0; r < nn->max_r_vecs; r++) { + struct msix_entry *entry; - *dma_addr = nfp_net_dma_map_rx(dp, frag); - if (dma_mapping_error(dp->dev, *dma_addr)) { - nfp_net_free_frag(frag, dp->xdp_prog); - nn_dp_warn(dp, "Failed to map DMA RX buffer\n"); - return NULL; - } + entry = &nn->irq_entries[NFP_NET_NON_Q_VECTORS + r]; - return frag; -} + r_vec = &nn->r_vecs[r]; + r_vec->nfp_net = nn; + r_vec->irq_entry = entry->entry; + r_vec->irq_vector = entry->vector; -static void *nfp_net_napi_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) -{ - void *frag; + if (nn->dp.netdev) { + r_vec->handler = nfp_net_irq_rxtx; + } else { + r_vec->handler = nfp_ctrl_irq_rxtx; - if (!dp->xdp_prog) { - frag = napi_alloc_frag(dp->fl_bufsz); - if (unlikely(!frag)) - return NULL; - } else { - struct page *page; - - page = dev_alloc_page(); - if (unlikely(!page)) - return NULL; - frag = page_address(page); - } - - *dma_addr = nfp_net_dma_map_rx(dp, frag); - if (dma_mapping_error(dp->dev, *dma_addr)) { - nfp_net_free_frag(frag, dp->xdp_prog); - nn_dp_warn(dp, "Failed to map DMA RX buffer\n"); - return NULL; - } - - return frag; -} - -/** - * nfp_net_rx_give_one() - Put mapped skb on the software and hardware rings - * @dp: NFP Net data path struct - * @rx_ring: RX ring structure - * @frag: page fragment buffer - * @dma_addr: DMA address of skb mapping - */ -static void nfp_net_rx_give_one(const struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring, - void *frag, dma_addr_t dma_addr) -{ - unsigned int wr_idx; - - wr_idx = D_IDX(rx_ring, rx_ring->wr_p); - - nfp_net_dma_sync_dev_rx(dp, dma_addr); - - /* Stash SKB and DMA address away */ - rx_ring->rxbufs[wr_idx].frag = frag; - rx_ring->rxbufs[wr_idx].dma_addr = dma_addr; - - /* Fill freelist descriptor */ - rx_ring->rxds[wr_idx].fld.reserved = 0; - rx_ring->rxds[wr_idx].fld.meta_len_dd = 0; - nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld, - dma_addr + dp->rx_dma_off); - - rx_ring->wr_p++; - if (!(rx_ring->wr_p % NFP_NET_FL_BATCH)) { - /* Update write pointer of the freelist queue. Make - * sure all writes are flushed before telling the hardware. - */ - wmb(); - nfp_qcp_wr_ptr_add(rx_ring->qcp_fl, NFP_NET_FL_BATCH); - } -} - -/** - * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable - * @rx_ring: RX ring structure - * - * Assumes that the device is stopped, must be idempotent. - */ -static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring) -{ - unsigned int wr_idx, last_idx; - - /* wr_p == rd_p means ring was never fed FL bufs. RX rings are always - * kept at cnt - 1 FL bufs. 
- */ - if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0) - return; - - /* Move the empty entry to the end of the list */ - wr_idx = D_IDX(rx_ring, rx_ring->wr_p); - last_idx = rx_ring->cnt - 1; - if (rx_ring->r_vec->xsk_pool) { - rx_ring->xsk_rxbufs[wr_idx] = rx_ring->xsk_rxbufs[last_idx]; - memset(&rx_ring->xsk_rxbufs[last_idx], 0, - sizeof(*rx_ring->xsk_rxbufs)); - } else { - rx_ring->rxbufs[wr_idx] = rx_ring->rxbufs[last_idx]; - memset(&rx_ring->rxbufs[last_idx], 0, sizeof(*rx_ring->rxbufs)); - } - - memset(rx_ring->rxds, 0, rx_ring->size); - rx_ring->wr_p = 0; - rx_ring->rd_p = 0; -} - -/** - * nfp_net_rx_ring_bufs_free() - Free any buffers currently on the RX ring - * @dp: NFP Net data path struct - * @rx_ring: RX ring to remove buffers from - * - * Assumes that the device is stopped and buffers are in [0, ring->cnt - 1) - * entries. After device is disabled nfp_net_rx_ring_reset() must be called - * to restore required ring geometry. - */ -static void -nfp_net_rx_ring_bufs_free(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring) -{ - unsigned int i; - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) - return; - - for (i = 0; i < rx_ring->cnt - 1; i++) { - /* NULL skb can only happen when initial filling of the ring - * fails to allocate enough buffers and calls here to free - * already allocated ones. - */ - if (!rx_ring->rxbufs[i].frag) - continue; - - nfp_net_dma_unmap_rx(dp, rx_ring->rxbufs[i].dma_addr); - nfp_net_free_frag(rx_ring->rxbufs[i].frag, dp->xdp_prog); - rx_ring->rxbufs[i].dma_addr = 0; - rx_ring->rxbufs[i].frag = NULL; - } -} - -/** - * nfp_net_rx_ring_bufs_alloc() - Fill RX ring with buffers (don't give to FW) - * @dp: NFP Net data path struct - * @rx_ring: RX ring to remove buffers from - */ -static int -nfp_net_rx_ring_bufs_alloc(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring) -{ - struct nfp_net_rx_buf *rxbufs; - unsigned int i; - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) - return 0; - - rxbufs = rx_ring->rxbufs; - - for (i = 0; i < rx_ring->cnt - 1; i++) { - rxbufs[i].frag = nfp_net_rx_alloc_one(dp, &rxbufs[i].dma_addr); - if (!rxbufs[i].frag) { - nfp_net_rx_ring_bufs_free(dp, rx_ring); - return -ENOMEM; - } - } - - return 0; -} - -/** - * nfp_net_rx_ring_fill_freelist() - Give buffers from the ring to FW - * @dp: NFP Net data path struct - * @rx_ring: RX ring to fill - */ -static void -nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring) -{ - unsigned int i; - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) - return nfp_net_xsk_rx_ring_fill_freelist(rx_ring); - - for (i = 0; i < rx_ring->cnt - 1; i++) - nfp_net_rx_give_one(dp, rx_ring, rx_ring->rxbufs[i].frag, - rx_ring->rxbufs[i].dma_addr); -} - -/** - * nfp_net_rx_csum_has_errors() - group check if rxd has any csum errors - * @flags: RX descriptor flags field in CPU byte order - */ -static int nfp_net_rx_csum_has_errors(u16 flags) -{ - u16 csum_all_checked, csum_all_ok; - - csum_all_checked = flags & __PCIE_DESC_RX_CSUM_ALL; - csum_all_ok = flags & __PCIE_DESC_RX_CSUM_ALL_OK; - - return csum_all_checked != (csum_all_ok << PCIE_DESC_RX_CSUM_OK_SHIFT); -} - -/** - * nfp_net_rx_csum() - set SKB checksum field based on RX descriptor flags - * @dp: NFP Net data path struct - * @r_vec: per-ring structure - * @rxd: Pointer to RX descriptor - * @meta: Parsed metadata prepend - * @skb: Pointer to SKB - */ -void nfp_net_rx_csum(const struct nfp_net_dp *dp, - struct nfp_net_r_vector *r_vec, - const struct nfp_net_rx_desc *rxd, - const struct nfp_meta_parsed 
*meta, struct sk_buff *skb) -{ - skb_checksum_none_assert(skb); - - if (!(dp->netdev->features & NETIF_F_RXCSUM)) - return; - - if (meta->csum_type) { - skb->ip_summed = meta->csum_type; - skb->csum = meta->csum; - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->hw_csum_rx_complete++; - u64_stats_update_end(&r_vec->rx_sync); - return; - } - - if (nfp_net_rx_csum_has_errors(le16_to_cpu(rxd->rxd.flags))) { - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->hw_csum_rx_error++; - u64_stats_update_end(&r_vec->rx_sync); - return; - } - - /* Assume that the firmware will never report inner CSUM_OK unless outer - * L4 headers were successfully parsed. FW will always report zero UDP - * checksum as CSUM_OK. - */ - if (rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM_OK || - rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM_OK) { - __skb_incr_checksum_unnecessary(skb); - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->hw_csum_rx_ok++; - u64_stats_update_end(&r_vec->rx_sync); - } - - if (rxd->rxd.flags & PCIE_DESC_RX_I_TCP_CSUM_OK || - rxd->rxd.flags & PCIE_DESC_RX_I_UDP_CSUM_OK) { - __skb_incr_checksum_unnecessary(skb); - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->hw_csum_rx_inner_ok++; - u64_stats_update_end(&r_vec->rx_sync); - } -} - -static void -nfp_net_set_hash(struct net_device *netdev, struct nfp_meta_parsed *meta, - unsigned int type, __be32 *hash) -{ - if (!(netdev->features & NETIF_F_RXHASH)) - return; - - switch (type) { - case NFP_NET_RSS_IPV4: - case NFP_NET_RSS_IPV6: - case NFP_NET_RSS_IPV6_EX: - meta->hash_type = PKT_HASH_TYPE_L3; - break; - default: - meta->hash_type = PKT_HASH_TYPE_L4; - break; - } - - meta->hash = get_unaligned_be32(hash); -} - -static void -nfp_net_set_hash_desc(struct net_device *netdev, struct nfp_meta_parsed *meta, - void *data, struct nfp_net_rx_desc *rxd) -{ - struct nfp_net_rx_hash *rx_hash = data; - - if (!(rxd->rxd.flags & PCIE_DESC_RX_RSS)) - return; - - nfp_net_set_hash(netdev, meta, get_unaligned_be32(&rx_hash->hash_type), - &rx_hash->hash); -} - -bool -nfp_net_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, - void *data, void *pkt, unsigned int pkt_len, int meta_len) -{ - u32 meta_info; - - meta_info = get_unaligned_be32(data); - data += 4; - - while (meta_info) { - switch (meta_info & NFP_NET_META_FIELD_MASK) { - case NFP_NET_META_HASH: - meta_info >>= NFP_NET_META_FIELD_SIZE; - nfp_net_set_hash(netdev, meta, - meta_info & NFP_NET_META_FIELD_MASK, - (__be32 *)data); - data += 4; - break; - case NFP_NET_META_MARK: - meta->mark = get_unaligned_be32(data); - data += 4; - break; - case NFP_NET_META_PORTID: - meta->portid = get_unaligned_be32(data); - data += 4; - break; - case NFP_NET_META_CSUM: - meta->csum_type = CHECKSUM_COMPLETE; - meta->csum = - (__force __wsum)__get_unaligned_cpu32(data); - data += 4; - break; - case NFP_NET_META_RESYNC_INFO: - if (nfp_net_tls_rx_resync_req(netdev, data, pkt, - pkt_len)) - return false; - data += sizeof(struct nfp_net_tls_resync_req); - break; - default: - return true; - } - - meta_info >>= NFP_NET_META_FIELD_SIZE; - } - - return data != pkt; -} - -static void -nfp_net_rx_drop(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, - struct nfp_net_rx_ring *rx_ring, struct nfp_net_rx_buf *rxbuf, - struct sk_buff *skb) -{ - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->rx_drops++; - /* If we have both skb and rxbuf the replacement buffer allocation - * must have failed, count this as an alloc failure. 
- */ - if (skb && rxbuf) - r_vec->rx_replace_buf_alloc_fail++; - u64_stats_update_end(&r_vec->rx_sync); - - /* skb is build based on the frag, free_skb() would free the frag - * so to be able to reuse it we need an extra ref. - */ - if (skb && rxbuf && skb->head == rxbuf->frag) - page_ref_inc(virt_to_head_page(rxbuf->frag)); - if (rxbuf) - nfp_net_rx_give_one(dp, rx_ring, rxbuf->frag, rxbuf->dma_addr); - if (skb) - dev_kfree_skb_any(skb); -} - -static bool -nfp_net_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring, - struct nfp_net_tx_ring *tx_ring, - struct nfp_net_rx_buf *rxbuf, unsigned int dma_off, - unsigned int pkt_len, bool *completed) -{ - unsigned int dma_map_sz = dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA; - struct nfp_net_tx_buf *txbuf; - struct nfp_net_tx_desc *txd; - int wr_idx; - - /* Reject if xdp_adjust_tail grow packet beyond DMA area */ - if (pkt_len + dma_off > dma_map_sz) - return false; - - if (unlikely(nfp_net_tx_full(tx_ring, 1))) { - if (!*completed) { - nfp_net_xdp_complete(tx_ring); - *completed = true; - } - - if (unlikely(nfp_net_tx_full(tx_ring, 1))) { - nfp_net_rx_drop(dp, rx_ring->r_vec, rx_ring, rxbuf, - NULL); - return false; - } - } - - wr_idx = D_IDX(tx_ring, tx_ring->wr_p); - - /* Stash the soft descriptor of the head then initialize it */ - txbuf = &tx_ring->txbufs[wr_idx]; - - nfp_net_rx_give_one(dp, rx_ring, txbuf->frag, txbuf->dma_addr); - - txbuf->frag = rxbuf->frag; - txbuf->dma_addr = rxbuf->dma_addr; - txbuf->fidx = -1; - txbuf->pkt_cnt = 1; - txbuf->real_len = pkt_len; - - dma_sync_single_for_device(dp->dev, rxbuf->dma_addr + dma_off, - pkt_len, DMA_BIDIRECTIONAL); - - /* Build TX descriptor */ - txd = &tx_ring->txds[wr_idx]; - txd->offset_eop = PCIE_DESC_TX_EOP; - txd->dma_len = cpu_to_le16(pkt_len); - nfp_desc_set_dma_addr(txd, rxbuf->dma_addr + dma_off); - txd->data_len = cpu_to_le16(pkt_len); - - txd->flags = 0; - txd->mss = 0; - txd->lso_hdrlen = 0; - - tx_ring->wr_p++; - tx_ring->wr_ptr_add++; - return true; -} - -/** - * nfp_net_rx() - receive up to @budget packets on @rx_ring - * @rx_ring: RX ring to receive from - * @budget: NAPI budget - * - * Note, this function is separated out from the napi poll function to - * more cleanly separate packet receive code from other bookkeeping - * functions performed in the napi poll function. - * - * Return: Number of packets received. - */ -static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget) -{ - struct nfp_net_r_vector *r_vec = rx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - struct nfp_net_tx_ring *tx_ring; - struct bpf_prog *xdp_prog; - bool xdp_tx_cmpl = false; - unsigned int true_bufsz; - struct sk_buff *skb; - int pkts_polled = 0; - struct xdp_buff xdp; - int idx; - - xdp_prog = READ_ONCE(dp->xdp_prog); - true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz; - xdp_init_buff(&xdp, PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM, - &rx_ring->xdp_rxq); - tx_ring = r_vec->xdp_ring; - - while (pkts_polled < budget) { - unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; - struct nfp_net_rx_buf *rxbuf; - struct nfp_net_rx_desc *rxd; - struct nfp_meta_parsed meta; - bool redir_egress = false; - struct net_device *netdev; - dma_addr_t new_dma_addr; - u32 meta_len_xdp = 0; - void *new_frag; - - idx = D_IDX(rx_ring, rx_ring->rd_p); - - rxd = &rx_ring->rxds[idx]; - if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) - break; - - /* Memory barrier to ensure that we won't do other reads - * before the DD bit. 
- */ - dma_rmb(); - - memset(&meta, 0, sizeof(meta)); - - rx_ring->rd_p++; - pkts_polled++; - - rxbuf = &rx_ring->rxbufs[idx]; - /* < meta_len > - * <-- [rx_offset] --> - * --------------------------------------------------------- - * | [XX] | metadata | packet | XXXX | - * --------------------------------------------------------- - * <---------------- data_len ---------------> - * - * The rx_offset is fixed for all packets, the meta_len can vary - * on a packet by packet basis. If rx_offset is set to zero - * (_RX_OFFSET_DYNAMIC) metadata starts at the beginning of the - * buffer and is immediately followed by the packet (no [XX]). - */ - meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; - data_len = le16_to_cpu(rxd->rxd.data_len); - pkt_len = data_len - meta_len; - - pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; - if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) - pkt_off += meta_len; - else - pkt_off += dp->rx_offset; - meta_off = pkt_off - meta_len; - - /* Stats update */ - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->rx_pkts++; - r_vec->rx_bytes += pkt_len; - u64_stats_update_end(&r_vec->rx_sync); - - if (unlikely(meta_len > NFP_NET_MAX_PREPEND || - (dp->rx_offset && meta_len > dp->rx_offset))) { - nn_dp_warn(dp, "oversized RX packet metadata %u\n", - meta_len); - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); - continue; - } - - nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, - data_len); - - if (!dp->chained_metadata_format) { - nfp_net_set_hash_desc(dp->netdev, &meta, - rxbuf->frag + meta_off, rxd); - } else if (meta_len) { - if (unlikely(nfp_net_parse_meta(dp->netdev, &meta, - rxbuf->frag + meta_off, - rxbuf->frag + pkt_off, - pkt_len, meta_len))) { - nn_dp_warn(dp, "invalid RX packet metadata\n"); - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, - NULL); - continue; - } - } - - if (xdp_prog && !meta.portid) { - void *orig_data = rxbuf->frag + pkt_off; - unsigned int dma_off; - int act; - - xdp_prepare_buff(&xdp, - rxbuf->frag + NFP_NET_RX_BUF_HEADROOM, - pkt_off - NFP_NET_RX_BUF_HEADROOM, - pkt_len, true); - - act = bpf_prog_run_xdp(xdp_prog, &xdp); - - pkt_len = xdp.data_end - xdp.data; - pkt_off += xdp.data - orig_data; - - switch (act) { - case XDP_PASS: - meta_len_xdp = xdp.data - xdp.data_meta; - break; - case XDP_TX: - dma_off = pkt_off - NFP_NET_RX_BUF_HEADROOM; - if (unlikely(!nfp_net_tx_xdp_buf(dp, rx_ring, - tx_ring, rxbuf, - dma_off, - pkt_len, - &xdp_tx_cmpl))) - trace_xdp_exception(dp->netdev, - xdp_prog, act); - continue; - default: - bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(dp->netdev, xdp_prog, act); - fallthrough; - case XDP_DROP: - nfp_net_rx_give_one(dp, rx_ring, rxbuf->frag, - rxbuf->dma_addr); - continue; - } - } - - if (likely(!meta.portid)) { - netdev = dp->netdev; - } else if (meta.portid == NFP_META_PORT_ID_CTRL) { - struct nfp_net *nn = netdev_priv(dp->netdev); - - nfp_app_ctrl_rx_raw(nn->app, rxbuf->frag + pkt_off, - pkt_len); - nfp_net_rx_give_one(dp, rx_ring, rxbuf->frag, - rxbuf->dma_addr); - continue; - } else { - struct nfp_net *nn; - - nn = netdev_priv(dp->netdev); - netdev = nfp_app_dev_get(nn->app, meta.portid, - &redir_egress); - if (unlikely(!netdev)) { - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, - NULL); - continue; - } - - if (nfp_netdev_is_nfp_repr(netdev)) - nfp_repr_inc_rx_stats(netdev, pkt_len); - } - - skb = build_skb(rxbuf->frag, true_bufsz); - if (unlikely(!skb)) { - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); - continue; - 
} - new_frag = nfp_net_napi_alloc_one(dp, &new_dma_addr); - if (unlikely(!new_frag)) { - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); - continue; - } - - nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); - - nfp_net_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); - - skb_reserve(skb, pkt_off); - skb_put(skb, pkt_len); - - skb->mark = meta.mark; - skb_set_hash(skb, meta.hash, meta.hash_type); - - skb_record_rx_queue(skb, rx_ring->idx); - skb->protocol = eth_type_trans(skb, netdev); - - nfp_net_rx_csum(dp, r_vec, rxd, &meta, skb); - -#ifdef CONFIG_TLS_DEVICE - if (rxd->rxd.flags & PCIE_DESC_RX_DECRYPTED) { - skb->decrypted = true; - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->hw_tls_rx++; - u64_stats_update_end(&r_vec->rx_sync); - } -#endif - - if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), - le16_to_cpu(rxd->rxd.vlan)); - if (meta_len_xdp) - skb_metadata_set(skb, meta_len_xdp); - - if (likely(!redir_egress)) { - napi_gro_receive(&rx_ring->r_vec->napi, skb); - } else { - skb->dev = netdev; - skb_reset_network_header(skb); - __skb_push(skb, ETH_HLEN); - dev_queue_xmit(skb); - } - } - - if (xdp_prog) { - if (tx_ring->wr_ptr_add) - nfp_net_tx_xmit_more_flush(tx_ring); - else if (unlikely(tx_ring->wr_p != tx_ring->rd_p) && - !xdp_tx_cmpl) - if (!nfp_net_xdp_complete(tx_ring)) - pkts_polled = budget; - } - - return pkts_polled; -} - -/** - * nfp_net_poll() - napi poll function - * @napi: NAPI structure - * @budget: NAPI budget - * - * Return: number of packets polled. - */ -static int nfp_net_poll(struct napi_struct *napi, int budget) -{ - struct nfp_net_r_vector *r_vec = - container_of(napi, struct nfp_net_r_vector, napi); - unsigned int pkts_polled = 0; - - if (r_vec->tx_ring) - nfp_net_tx_complete(r_vec->tx_ring, budget); - if (r_vec->rx_ring) - pkts_polled = nfp_net_rx(r_vec->rx_ring, budget); - - if (pkts_polled < budget) - if (napi_complete_done(napi, pkts_polled)) - nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); - - if (r_vec->nfp_net->rx_coalesce_adapt_on && r_vec->rx_ring) { - struct dim_sample dim_sample = {}; - unsigned int start; - u64 pkts, bytes; - - do { - start = u64_stats_fetch_begin(&r_vec->rx_sync); - pkts = r_vec->rx_pkts; - bytes = r_vec->rx_bytes; - } while (u64_stats_fetch_retry(&r_vec->rx_sync, start)); - - dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); - net_dim(&r_vec->rx_dim, dim_sample); - } - - if (r_vec->nfp_net->tx_coalesce_adapt_on && r_vec->tx_ring) { - struct dim_sample dim_sample = {}; - unsigned int start; - u64 pkts, bytes; - - do { - start = u64_stats_fetch_begin(&r_vec->tx_sync); - pkts = r_vec->tx_pkts; - bytes = r_vec->tx_bytes; - } while (u64_stats_fetch_retry(&r_vec->tx_sync, start)); - - dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); - net_dim(&r_vec->tx_dim, dim_sample); - } - - return pkts_polled; -} - -/* Control device data path - */ - -static bool -nfp_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, - struct sk_buff *skb, bool old) -{ - unsigned int real_len = skb->len, meta_len = 0; - struct nfp_net_tx_ring *tx_ring; - struct nfp_net_tx_buf *txbuf; - struct nfp_net_tx_desc *txd; - struct nfp_net_dp *dp; - dma_addr_t dma_addr; - int wr_idx; - - dp = &r_vec->nfp_net->dp; - tx_ring = r_vec->tx_ring; - - if (WARN_ON_ONCE(skb_shinfo(skb)->nr_frags)) { - nn_dp_warn(dp, "Driver's CTRL TX does not implement gather\n"); - goto err_free; - } - - if (unlikely(nfp_net_tx_full(tx_ring, 1))) { - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_busy++; - 
u64_stats_update_end(&r_vec->tx_sync); - if (!old) - __skb_queue_tail(&r_vec->queue, skb); - else - __skb_queue_head(&r_vec->queue, skb); - return true; - } - - if (nfp_app_ctrl_has_meta(nn->app)) { - if (unlikely(skb_headroom(skb) < 8)) { - nn_dp_warn(dp, "CTRL TX on skb without headroom\n"); - goto err_free; - } - meta_len = 8; - put_unaligned_be32(NFP_META_PORT_ID_CTRL, skb_push(skb, 4)); - put_unaligned_be32(NFP_NET_META_PORTID, skb_push(skb, 4)); - } - - /* Start with the head skbuf */ - dma_addr = dma_map_single(dp->dev, skb->data, skb_headlen(skb), - DMA_TO_DEVICE); - if (dma_mapping_error(dp->dev, dma_addr)) - goto err_dma_warn; - - wr_idx = D_IDX(tx_ring, tx_ring->wr_p); - - /* Stash the soft descriptor of the head then initialize it */ - txbuf = &tx_ring->txbufs[wr_idx]; - txbuf->skb = skb; - txbuf->dma_addr = dma_addr; - txbuf->fidx = -1; - txbuf->pkt_cnt = 1; - txbuf->real_len = real_len; - - /* Build TX descriptor */ - txd = &tx_ring->txds[wr_idx]; - txd->offset_eop = meta_len | PCIE_DESC_TX_EOP; - txd->dma_len = cpu_to_le16(skb_headlen(skb)); - nfp_desc_set_dma_addr(txd, dma_addr); - txd->data_len = cpu_to_le16(skb->len); - - txd->flags = 0; - txd->mss = 0; - txd->lso_hdrlen = 0; - - tx_ring->wr_p++; - tx_ring->wr_ptr_add++; - nfp_net_tx_xmit_more_flush(tx_ring); - - return false; - -err_dma_warn: - nn_dp_warn(dp, "Failed to DMA map TX CTRL buffer\n"); -err_free: - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_errors++; - u64_stats_update_end(&r_vec->tx_sync); - dev_kfree_skb_any(skb); - return false; -} - -bool __nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; - - return nfp_ctrl_tx_one(nn, r_vec, skb, false); -} - -bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; - bool ret; - - spin_lock_bh(&r_vec->lock); - ret = nfp_ctrl_tx_one(nn, r_vec, skb, false); - spin_unlock_bh(&r_vec->lock); - - return ret; -} - -static void __nfp_ctrl_tx_queued(struct nfp_net_r_vector *r_vec) -{ - struct sk_buff *skb; - - while ((skb = __skb_dequeue(&r_vec->queue))) - if (nfp_ctrl_tx_one(r_vec->nfp_net, r_vec, skb, true)) - return; -} - -static bool -nfp_ctrl_meta_ok(struct nfp_net *nn, void *data, unsigned int meta_len) -{ - u32 meta_type, meta_tag; - - if (!nfp_app_ctrl_has_meta(nn->app)) - return !meta_len; - - if (meta_len != 8) - return false; - - meta_type = get_unaligned_be32(data); - meta_tag = get_unaligned_be32(data + 4); - - return (meta_type == NFP_NET_META_PORTID && - meta_tag == NFP_META_PORT_ID_CTRL); -} - -static bool -nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp, - struct nfp_net_r_vector *r_vec, struct nfp_net_rx_ring *rx_ring) -{ - unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; - struct nfp_net_rx_buf *rxbuf; - struct nfp_net_rx_desc *rxd; - dma_addr_t new_dma_addr; - struct sk_buff *skb; - void *new_frag; - int idx; - - idx = D_IDX(rx_ring, rx_ring->rd_p); - - rxd = &rx_ring->rxds[idx]; - if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) - return false; - - /* Memory barrier to ensure that we won't do other reads - * before the DD bit. 
- */ - dma_rmb(); - - rx_ring->rd_p++; - - rxbuf = &rx_ring->rxbufs[idx]; - meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; - data_len = le16_to_cpu(rxd->rxd.data_len); - pkt_len = data_len - meta_len; - - pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; - if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) - pkt_off += meta_len; - else - pkt_off += dp->rx_offset; - meta_off = pkt_off - meta_len; - - /* Stats update */ - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->rx_pkts++; - r_vec->rx_bytes += pkt_len; - u64_stats_update_end(&r_vec->rx_sync); - - nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, data_len); - - if (unlikely(!nfp_ctrl_meta_ok(nn, rxbuf->frag + meta_off, meta_len))) { - nn_dp_warn(dp, "incorrect metadata for ctrl packet (%d)\n", - meta_len); - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); - return true; - } - - skb = build_skb(rxbuf->frag, dp->fl_bufsz); - if (unlikely(!skb)) { - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); - return true; - } - new_frag = nfp_net_napi_alloc_one(dp, &new_dma_addr); - if (unlikely(!new_frag)) { - nfp_net_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); - return true; - } - - nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); - - nfp_net_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); - - skb_reserve(skb, pkt_off); - skb_put(skb, pkt_len); - - nfp_app_ctrl_rx(nn->app, skb); - - return true; -} - -static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec) -{ - struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring; - struct nfp_net *nn = r_vec->nfp_net; - struct nfp_net_dp *dp = &nn->dp; - unsigned int budget = 512; - - while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--) - continue; - - return budget; -} - -static void nfp_ctrl_poll(struct tasklet_struct *t) -{ - struct nfp_net_r_vector *r_vec = from_tasklet(r_vec, t, tasklet); - - spin_lock(&r_vec->lock); - nfp_net_tx_complete(r_vec->tx_ring, 0); - __nfp_ctrl_tx_queued(r_vec); - spin_unlock(&r_vec->lock); - - if (nfp_ctrl_rx(r_vec)) { - nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); - } else { - tasklet_schedule(&r_vec->tasklet); - nn_dp_warn(&r_vec->nfp_net->dp, - "control message budget exceeded!\n"); - } -} - -/* Setup and Configuration - */ - -/** - * nfp_net_vecs_init() - Assign IRQs and setup rvecs. 
- * @nn: NFP Network structure - */ -static void nfp_net_vecs_init(struct nfp_net *nn) -{ - struct nfp_net_r_vector *r_vec; - int r; - - nn->lsc_handler = nfp_net_irq_lsc; - nn->exn_handler = nfp_net_irq_exn; - - for (r = 0; r < nn->max_r_vecs; r++) { - struct msix_entry *entry; - - entry = &nn->irq_entries[NFP_NET_NON_Q_VECTORS + r]; - - r_vec = &nn->r_vecs[r]; - r_vec->nfp_net = nn; - r_vec->irq_entry = entry->entry; - r_vec->irq_vector = entry->vector; - - if (nn->dp.netdev) { - r_vec->handler = nfp_net_irq_rxtx; - } else { - r_vec->handler = nfp_ctrl_irq_rxtx; - - __skb_queue_head_init(&r_vec->queue); - spin_lock_init(&r_vec->lock); - tasklet_setup(&r_vec->tasklet, nfp_ctrl_poll); - tasklet_disable(&r_vec->tasklet); - } + __skb_queue_head_init(&r_vec->queue); + spin_lock_init(&r_vec->lock); + tasklet_setup(&r_vec->tasklet, nfp_nfd3_ctrl_poll); + tasklet_disable(&r_vec->tasklet); + } cpumask_set_cpu(r, &r_vec->affinity_mask); } } -/** - * nfp_net_tx_ring_free() - Free resources allocated to a TX ring - * @tx_ring: TX ring to free - */ -static void nfp_net_tx_ring_free(struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - - kvfree(tx_ring->txbufs); - - if (tx_ring->txds) - dma_free_coherent(dp->dev, tx_ring->size, - tx_ring->txds, tx_ring->dma); - - tx_ring->cnt = 0; - tx_ring->txbufs = NULL; - tx_ring->txds = NULL; - tx_ring->dma = 0; - tx_ring->size = 0; -} - -/** - * nfp_net_tx_ring_alloc() - Allocate resource for a TX ring - * @dp: NFP Net data path struct - * @tx_ring: TX Ring structure to allocate - * - * Return: 0 on success, negative errno otherwise. - */ -static int -nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - - tx_ring->cnt = dp->txd_cnt; - - tx_ring->size = array_size(tx_ring->cnt, sizeof(*tx_ring->txds)); - tx_ring->txds = dma_alloc_coherent(dp->dev, tx_ring->size, - &tx_ring->dma, - GFP_KERNEL | __GFP_NOWARN); - if (!tx_ring->txds) { - netdev_warn(dp->netdev, "failed to allocate TX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", - tx_ring->cnt); - goto err_alloc; - } - - tx_ring->txbufs = kvcalloc(tx_ring->cnt, sizeof(*tx_ring->txbufs), - GFP_KERNEL); - if (!tx_ring->txbufs) - goto err_alloc; - - if (!tx_ring->is_xdp && dp->netdev) - netif_set_xps_queue(dp->netdev, &r_vec->affinity_mask, - tx_ring->idx); - - return 0; - -err_alloc: - nfp_net_tx_ring_free(tx_ring); - return -ENOMEM; -} - -static void -nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring) -{ - unsigned int i; - - if (!tx_ring->is_xdp) - return; - - for (i = 0; i < tx_ring->cnt; i++) { - if (!tx_ring->txbufs[i].frag) - return; - - nfp_net_dma_unmap_rx(dp, tx_ring->txbufs[i].dma_addr); - __free_page(virt_to_page(tx_ring->txbufs[i].frag)); - } -} - -static int -nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_tx_buf *txbufs = tx_ring->txbufs; - unsigned int i; - - if (!tx_ring->is_xdp) - return 0; - - for (i = 0; i < tx_ring->cnt; i++) { - txbufs[i].frag = nfp_net_rx_alloc_one(dp, &txbufs[i].dma_addr); - if (!txbufs[i].frag) { - nfp_net_tx_ring_bufs_free(dp, tx_ring); - return -ENOMEM; - } - } - - return 0; -} - -static int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) -{ - unsigned int r; - - dp->tx_rings = kcalloc(dp->num_tx_rings, sizeof(*dp->tx_rings), - GFP_KERNEL); - if (!dp->tx_rings) 
- return -ENOMEM; - - for (r = 0; r < dp->num_tx_rings; r++) { - int bias = 0; - - if (r >= dp->num_stack_tx_rings) - bias = dp->num_stack_tx_rings; - - nfp_net_tx_ring_init(&dp->tx_rings[r], &nn->r_vecs[r - bias], - r, bias); - - if (nfp_net_tx_ring_alloc(dp, &dp->tx_rings[r])) - goto err_free_prev; - - if (nfp_net_tx_ring_bufs_alloc(dp, &dp->tx_rings[r])) - goto err_free_ring; - } - - return 0; - -err_free_prev: - while (r--) { - nfp_net_tx_ring_bufs_free(dp, &dp->tx_rings[r]); -err_free_ring: - nfp_net_tx_ring_free(&dp->tx_rings[r]); - } - kfree(dp->tx_rings); - return -ENOMEM; -} - -static void nfp_net_tx_rings_free(struct nfp_net_dp *dp) -{ - unsigned int r; - - for (r = 0; r < dp->num_tx_rings; r++) { - nfp_net_tx_ring_bufs_free(dp, &dp->tx_rings[r]); - nfp_net_tx_ring_free(&dp->tx_rings[r]); - } - - kfree(dp->tx_rings); -} - -/** - * nfp_net_rx_ring_free() - Free resources allocated to a RX ring - * @rx_ring: RX ring to free - */ -static void nfp_net_rx_ring_free(struct nfp_net_rx_ring *rx_ring) -{ - struct nfp_net_r_vector *r_vec = rx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - - if (dp->netdev) - xdp_rxq_info_unreg(&rx_ring->xdp_rxq); - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) - kvfree(rx_ring->xsk_rxbufs); - else - kvfree(rx_ring->rxbufs); - - if (rx_ring->rxds) - dma_free_coherent(dp->dev, rx_ring->size, - rx_ring->rxds, rx_ring->dma); - - rx_ring->cnt = 0; - rx_ring->rxbufs = NULL; - rx_ring->xsk_rxbufs = NULL; - rx_ring->rxds = NULL; - rx_ring->dma = 0; - rx_ring->size = 0; -} - -/** - * nfp_net_rx_ring_alloc() - Allocate resource for a RX ring - * @dp: NFP Net data path struct - * @rx_ring: RX ring to allocate - * - * Return: 0 on success, negative errno otherwise. - */ -static int -nfp_net_rx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring) -{ - enum xdp_mem_type mem_type; - size_t rxbuf_sw_desc_sz; - int err; - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) { - mem_type = MEM_TYPE_XSK_BUFF_POOL; - rxbuf_sw_desc_sz = sizeof(*rx_ring->xsk_rxbufs); - } else { - mem_type = MEM_TYPE_PAGE_ORDER0; - rxbuf_sw_desc_sz = sizeof(*rx_ring->rxbufs); - } - - if (dp->netdev) { - err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, dp->netdev, - rx_ring->idx, rx_ring->r_vec->napi.napi_id); - if (err < 0) - return err; - - err = xdp_rxq_info_reg_mem_model(&rx_ring->xdp_rxq, - mem_type, NULL); - if (err) - goto err_alloc; - } - - rx_ring->cnt = dp->rxd_cnt; - rx_ring->size = array_size(rx_ring->cnt, sizeof(*rx_ring->rxds)); - rx_ring->rxds = dma_alloc_coherent(dp->dev, rx_ring->size, - &rx_ring->dma, - GFP_KERNEL | __GFP_NOWARN); - if (!rx_ring->rxds) { - netdev_warn(dp->netdev, "failed to allocate RX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", - rx_ring->cnt); - goto err_alloc; - } - - if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) { - rx_ring->xsk_rxbufs = kvcalloc(rx_ring->cnt, rxbuf_sw_desc_sz, - GFP_KERNEL); - if (!rx_ring->xsk_rxbufs) - goto err_alloc; - } else { - rx_ring->rxbufs = kvcalloc(rx_ring->cnt, rxbuf_sw_desc_sz, - GFP_KERNEL); - if (!rx_ring->rxbufs) - goto err_alloc; - } - - return 0; - -err_alloc: - nfp_net_rx_ring_free(rx_ring); - return -ENOMEM; -} - -static int nfp_net_rx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) -{ - unsigned int r; - - dp->rx_rings = kcalloc(dp->num_rx_rings, sizeof(*dp->rx_rings), - GFP_KERNEL); - if (!dp->rx_rings) - return -ENOMEM; - - for (r = 0; r < dp->num_rx_rings; r++) { - nfp_net_rx_ring_init(&dp->rx_rings[r], &nn->r_vecs[r], r); - - if 
(nfp_net_rx_ring_alloc(dp, &dp->rx_rings[r])) - goto err_free_prev; - - if (nfp_net_rx_ring_bufs_alloc(dp, &dp->rx_rings[r])) - goto err_free_ring; - } - - return 0; - -err_free_prev: - while (r--) { - nfp_net_rx_ring_bufs_free(dp, &dp->rx_rings[r]); -err_free_ring: - nfp_net_rx_ring_free(&dp->rx_rings[r]); - } - kfree(dp->rx_rings); - return -ENOMEM; -} - -static void nfp_net_rx_rings_free(struct nfp_net_dp *dp) -{ - unsigned int r; - - for (r = 0; r < dp->num_rx_rings; r++) { - nfp_net_rx_ring_bufs_free(dp, &dp->rx_rings[r]); - nfp_net_rx_ring_free(&dp->rx_rings[r]); - } - - kfree(dp->rx_rings); -} - static void nfp_net_napi_add(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, int idx) { if (dp->netdev) netif_napi_add(dp->netdev, &r_vec->napi, nfp_net_has_xsk_pool_slow(dp, idx) ? - nfp_net_xsk_poll : nfp_net_poll, + nfp_nfd3_xsk_poll : nfp_nfd3_poll, NAPI_POLL_WEIGHT); else tasklet_enable(&r_vec->tasklet); @@ -2858,17 +911,6 @@ static void nfp_net_write_mac_addr(struct nfp_net *nn, const u8 *addr) nn_writew(nn, NFP_NET_CFG_MACADDR + 6, get_unaligned_be16(addr + 4)); } -static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx) -{ - nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), 0); - nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), 0); - nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), 0); - - nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), 0); - nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), 0); - nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), 0); -} - /** * nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP * @nn: NFP Net device to reconfigure @@ -2911,25 +953,6 @@ static void nfp_net_clear_config_and_disable(struct nfp_net *nn) nn->dp.ctrl = new_ctrl; } -static void -nfp_net_rx_ring_hw_cfg_write(struct nfp_net *nn, - struct nfp_net_rx_ring *rx_ring, unsigned int idx) -{ - /* Write the DMA address, size and MSI-X info to the device */ - nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), rx_ring->dma); - nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), ilog2(rx_ring->cnt)); - nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), rx_ring->r_vec->irq_entry); -} - -static void -nfp_net_tx_ring_hw_cfg_write(struct nfp_net *nn, - struct nfp_net_tx_ring *tx_ring, unsigned int idx) -{ - nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), tx_ring->dma); - nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), ilog2(tx_ring->cnt)); - nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), tx_ring->r_vec->irq_entry); -} - /** * nfp_net_set_config_and_enable() - Write control BAR and enable NFP * @nn: NFP Net device to reconfigure @@ -3872,7 +1895,7 @@ const struct net_device_ops nfp_net_netdev_ops = { .ndo_uninit = nfp_app_ndo_uninit, .ndo_open = nfp_net_netdev_open, .ndo_stop = nfp_net_netdev_close, - .ndo_start_xmit = nfp_net_tx, + .ndo_start_xmit = nfp_nfd3_tx, .ndo_get_stats64 = nfp_net_stat64, .ndo_vlan_rx_add_vid = nfp_net_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = nfp_net_vlan_rx_kill_vid, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c index 2c74b3c5aef9..59b852e18758 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c @@ -1,10 +1,11 @@ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) -/* Copyright (C) 2015-2018 Netronome Systems, Inc. */ +/* Copyright (C) 2015-2019 Netronome Systems, Inc. 
*/ #include #include #include #include "nfp_net.h" +#include "nfp_net_dp.h" static struct dentry *nfp_dir; @@ -80,10 +81,8 @@ static int nfp_tx_q_show(struct seq_file *file, void *data) { struct nfp_net_r_vector *r_vec = file->private; struct nfp_net_tx_ring *tx_ring; - struct nfp_net_tx_desc *txd; - int d_rd_p, d_wr_p, txd_cnt; struct nfp_net *nn; - int i; + int d_rd_p, d_wr_p; rtnl_lock(); @@ -97,8 +96,6 @@ static int nfp_tx_q_show(struct seq_file *file, void *data) if (!nfp_net_running(nn)) goto out; - txd_cnt = tx_ring->cnt; - d_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); d_wr_p = nfp_qcp_wr_ptr_read(tx_ring->qcp_q); @@ -108,41 +105,8 @@ static int nfp_tx_q_show(struct seq_file *file, void *data) tx_ring->cnt, &tx_ring->dma, tx_ring->txds, tx_ring->rd_p, tx_ring->wr_p, d_rd_p, d_wr_p); - for (i = 0; i < txd_cnt; i++) { - struct xdp_buff *xdp; - struct sk_buff *skb; - - txd = &tx_ring->txds[i]; - seq_printf(file, "%04d: 0x%08x 0x%08x 0x%08x 0x%08x", i, - txd->vals[0], txd->vals[1], - txd->vals[2], txd->vals[3]); - - if (!tx_ring->is_xdp) { - skb = READ_ONCE(tx_ring->txbufs[i].skb); - if (skb) - seq_printf(file, " skb->head=%p skb->data=%p", - skb->head, skb->data); - } else { - xdp = READ_ONCE(tx_ring->txbufs[i].xdp); - if (xdp) - seq_printf(file, " xdp->data=%p", xdp->data); - } - - if (tx_ring->txbufs[i].dma_addr) - seq_printf(file, " dma_addr=%pad", - &tx_ring->txbufs[i].dma_addr); - - if (i == tx_ring->rd_p % txd_cnt) - seq_puts(file, " H_RD"); - if (i == tx_ring->wr_p % txd_cnt) - seq_puts(file, " H_WR"); - if (i == d_rd_p % txd_cnt) - seq_puts(file, " D_RD"); - if (i == d_wr_p % txd_cnt) - seq_puts(file, " D_WR"); - - seq_putc(file, '\n'); - } + nfp_net_debugfs_print_tx_descs(file, r_vec, tx_ring, + d_rd_p, d_wr_p); out: rtnl_unlock(); return 0; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c new file mode 100644 index 000000000000..8fe48569a612 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c @@ -0,0 +1,448 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2015-2019 Netronome Systems, Inc. */ + +#include "nfp_app.h" +#include "nfp_net_dp.h" +#include "nfp_net_xsk.h" + +/** + * nfp_net_rx_alloc_one() - Allocate and map page frag for RX + * @dp: NFP Net data path struct + * @dma_addr: Pointer to storage for DMA address (output param) + * + * This function will allcate a new page frag, map it for DMA. + * + * Return: allocated page frag or NULL on failure. + */ +void *nfp_net_rx_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) +{ + void *frag; + + if (!dp->xdp_prog) { + frag = netdev_alloc_frag(dp->fl_bufsz); + } else { + struct page *page; + + page = alloc_page(GFP_KERNEL); + frag = page ? page_address(page) : NULL; + } + if (!frag) { + nn_dp_warn(dp, "Failed to alloc receive page frag\n"); + return NULL; + } + + *dma_addr = nfp_net_dma_map_rx(dp, frag); + if (dma_mapping_error(dp->dev, *dma_addr)) { + nfp_net_free_frag(frag, dp->xdp_prog); + nn_dp_warn(dp, "Failed to map DMA RX buffer\n"); + return NULL; + } + + return frag; +} + +/** + * nfp_net_tx_ring_init() - Fill in the boilerplate for a TX ring + * @tx_ring: TX ring structure + * @r_vec: IRQ vector servicing this ring + * @idx: Ring index + * @is_xdp: Is this an XDP TX ring? 
+ */ +static void +nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring, + struct nfp_net_r_vector *r_vec, unsigned int idx, + bool is_xdp) +{ + struct nfp_net *nn = r_vec->nfp_net; + + tx_ring->idx = idx; + tx_ring->r_vec = r_vec; + tx_ring->is_xdp = is_xdp; + u64_stats_init(&tx_ring->r_vec->tx_sync); + + tx_ring->qcidx = tx_ring->idx * nn->stride_tx; + tx_ring->qcp_q = nn->tx_bar + NFP_QCP_QUEUE_OFF(tx_ring->qcidx); +} + +/** + * nfp_net_rx_ring_init() - Fill in the boilerplate for a RX ring + * @rx_ring: RX ring structure + * @r_vec: IRQ vector servicing this ring + * @idx: Ring index + */ +static void +nfp_net_rx_ring_init(struct nfp_net_rx_ring *rx_ring, + struct nfp_net_r_vector *r_vec, unsigned int idx) +{ + struct nfp_net *nn = r_vec->nfp_net; + + rx_ring->idx = idx; + rx_ring->r_vec = r_vec; + u64_stats_init(&rx_ring->r_vec->rx_sync); + + rx_ring->fl_qcidx = rx_ring->idx * nn->stride_rx; + rx_ring->qcp_fl = nn->rx_bar + NFP_QCP_QUEUE_OFF(rx_ring->fl_qcidx); +} + +/** + * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable + * @rx_ring: RX ring structure + * + * Assumes that the device is stopped, must be idempotent. + */ +void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring) +{ + unsigned int wr_idx, last_idx; + + /* wr_p == rd_p means ring was never fed FL bufs. RX rings are always + * kept at cnt - 1 FL bufs. + */ + if (rx_ring->wr_p == 0 && rx_ring->rd_p == 0) + return; + + /* Move the empty entry to the end of the list */ + wr_idx = D_IDX(rx_ring, rx_ring->wr_p); + last_idx = rx_ring->cnt - 1; + if (rx_ring->r_vec->xsk_pool) { + rx_ring->xsk_rxbufs[wr_idx] = rx_ring->xsk_rxbufs[last_idx]; + memset(&rx_ring->xsk_rxbufs[last_idx], 0, + sizeof(*rx_ring->xsk_rxbufs)); + } else { + rx_ring->rxbufs[wr_idx] = rx_ring->rxbufs[last_idx]; + memset(&rx_ring->rxbufs[last_idx], 0, sizeof(*rx_ring->rxbufs)); + } + + memset(rx_ring->rxds, 0, rx_ring->size); + rx_ring->wr_p = 0; + rx_ring->rd_p = 0; +} + +/** + * nfp_net_rx_ring_bufs_free() - Free any buffers currently on the RX ring + * @dp: NFP Net data path struct + * @rx_ring: RX ring to remove buffers from + * + * Assumes that the device is stopped and buffers are in [0, ring->cnt - 1) + * entries. After device is disabled nfp_net_rx_ring_reset() must be called + * to restore required ring geometry. + */ +static void +nfp_net_rx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + unsigned int i; + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) + return; + + for (i = 0; i < rx_ring->cnt - 1; i++) { + /* NULL skb can only happen when initial filling of the ring + * fails to allocate enough buffers and calls here to free + * already allocated ones. 
+ */ + if (!rx_ring->rxbufs[i].frag) + continue; + + nfp_net_dma_unmap_rx(dp, rx_ring->rxbufs[i].dma_addr); + nfp_net_free_frag(rx_ring->rxbufs[i].frag, dp->xdp_prog); + rx_ring->rxbufs[i].dma_addr = 0; + rx_ring->rxbufs[i].frag = NULL; + } +} + +/** + * nfp_net_rx_ring_bufs_alloc() - Fill RX ring with buffers (don't give to FW) + * @dp: NFP Net data path struct + * @rx_ring: RX ring to remove buffers from + */ +static int +nfp_net_rx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + struct nfp_net_rx_buf *rxbufs; + unsigned int i; + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) + return 0; + + rxbufs = rx_ring->rxbufs; + + for (i = 0; i < rx_ring->cnt - 1; i++) { + rxbufs[i].frag = nfp_net_rx_alloc_one(dp, &rxbufs[i].dma_addr); + if (!rxbufs[i].frag) { + nfp_net_rx_ring_bufs_free(dp, rx_ring); + return -ENOMEM; + } + } + + return 0; +} + +int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) +{ + unsigned int r; + + dp->tx_rings = kcalloc(dp->num_tx_rings, sizeof(*dp->tx_rings), + GFP_KERNEL); + if (!dp->tx_rings) + return -ENOMEM; + + for (r = 0; r < dp->num_tx_rings; r++) { + int bias = 0; + + if (r >= dp->num_stack_tx_rings) + bias = dp->num_stack_tx_rings; + + nfp_net_tx_ring_init(&dp->tx_rings[r], &nn->r_vecs[r - bias], + r, bias); + + if (nfp_net_tx_ring_alloc(dp, &dp->tx_rings[r])) + goto err_free_prev; + + if (nfp_net_tx_ring_bufs_alloc(dp, &dp->tx_rings[r])) + goto err_free_ring; + } + + return 0; + +err_free_prev: + while (r--) { + nfp_net_tx_ring_bufs_free(dp, &dp->tx_rings[r]); +err_free_ring: + nfp_net_tx_ring_free(dp, &dp->tx_rings[r]); + } + kfree(dp->tx_rings); + return -ENOMEM; +} + +void nfp_net_tx_rings_free(struct nfp_net_dp *dp) +{ + unsigned int r; + + for (r = 0; r < dp->num_tx_rings; r++) { + nfp_net_tx_ring_bufs_free(dp, &dp->tx_rings[r]); + nfp_net_tx_ring_free(dp, &dp->tx_rings[r]); + } + + kfree(dp->tx_rings); +} + +/** + * nfp_net_rx_ring_free() - Free resources allocated to a RX ring + * @rx_ring: RX ring to free + */ +static void nfp_net_rx_ring_free(struct nfp_net_rx_ring *rx_ring) +{ + struct nfp_net_r_vector *r_vec = rx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + + if (dp->netdev) + xdp_rxq_info_unreg(&rx_ring->xdp_rxq); + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) + kvfree(rx_ring->xsk_rxbufs); + else + kvfree(rx_ring->rxbufs); + + if (rx_ring->rxds) + dma_free_coherent(dp->dev, rx_ring->size, + rx_ring->rxds, rx_ring->dma); + + rx_ring->cnt = 0; + rx_ring->rxbufs = NULL; + rx_ring->xsk_rxbufs = NULL; + rx_ring->rxds = NULL; + rx_ring->dma = 0; + rx_ring->size = 0; +} + +/** + * nfp_net_rx_ring_alloc() - Allocate resource for a RX ring + * @dp: NFP Net data path struct + * @rx_ring: RX ring to allocate + * + * Return: 0 on success, negative errno otherwise. 
+ */ +static int +nfp_net_rx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring) +{ + enum xdp_mem_type mem_type; + size_t rxbuf_sw_desc_sz; + int err; + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) { + mem_type = MEM_TYPE_XSK_BUFF_POOL; + rxbuf_sw_desc_sz = sizeof(*rx_ring->xsk_rxbufs); + } else { + mem_type = MEM_TYPE_PAGE_ORDER0; + rxbuf_sw_desc_sz = sizeof(*rx_ring->rxbufs); + } + + if (dp->netdev) { + err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, dp->netdev, + rx_ring->idx, rx_ring->r_vec->napi.napi_id); + if (err < 0) + return err; + + err = xdp_rxq_info_reg_mem_model(&rx_ring->xdp_rxq, mem_type, NULL); + if (err) + goto err_alloc; + } + + rx_ring->cnt = dp->rxd_cnt; + rx_ring->size = array_size(rx_ring->cnt, sizeof(*rx_ring->rxds)); + rx_ring->rxds = dma_alloc_coherent(dp->dev, rx_ring->size, + &rx_ring->dma, + GFP_KERNEL | __GFP_NOWARN); + if (!rx_ring->rxds) { + netdev_warn(dp->netdev, "failed to allocate RX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", + rx_ring->cnt); + goto err_alloc; + } + + if (nfp_net_has_xsk_pool_slow(dp, rx_ring->idx)) { + rx_ring->xsk_rxbufs = kvcalloc(rx_ring->cnt, rxbuf_sw_desc_sz, + GFP_KERNEL); + if (!rx_ring->xsk_rxbufs) + goto err_alloc; + } else { + rx_ring->rxbufs = kvcalloc(rx_ring->cnt, rxbuf_sw_desc_sz, + GFP_KERNEL); + if (!rx_ring->rxbufs) + goto err_alloc; + } + + return 0; + +err_alloc: + nfp_net_rx_ring_free(rx_ring); + return -ENOMEM; +} + +int nfp_net_rx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) +{ + unsigned int r; + + dp->rx_rings = kcalloc(dp->num_rx_rings, sizeof(*dp->rx_rings), + GFP_KERNEL); + if (!dp->rx_rings) + return -ENOMEM; + + for (r = 0; r < dp->num_rx_rings; r++) { + nfp_net_rx_ring_init(&dp->rx_rings[r], &nn->r_vecs[r], r); + + if (nfp_net_rx_ring_alloc(dp, &dp->rx_rings[r])) + goto err_free_prev; + + if (nfp_net_rx_ring_bufs_alloc(dp, &dp->rx_rings[r])) + goto err_free_ring; + } + + return 0; + +err_free_prev: + while (r--) { + nfp_net_rx_ring_bufs_free(dp, &dp->rx_rings[r]); +err_free_ring: + nfp_net_rx_ring_free(&dp->rx_rings[r]); + } + kfree(dp->rx_rings); + return -ENOMEM; +} + +void nfp_net_rx_rings_free(struct nfp_net_dp *dp) +{ + unsigned int r; + + for (r = 0; r < dp->num_rx_rings; r++) { + nfp_net_rx_ring_bufs_free(dp, &dp->rx_rings[r]); + nfp_net_rx_ring_free(&dp->rx_rings[r]); + } + + kfree(dp->rx_rings); +} + +void +nfp_net_rx_ring_hw_cfg_write(struct nfp_net *nn, + struct nfp_net_rx_ring *rx_ring, unsigned int idx) +{ + /* Write the DMA address, size and MSI-X info to the device */ + nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), rx_ring->dma); + nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), ilog2(rx_ring->cnt)); + nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), rx_ring->r_vec->irq_entry); +} + +void +nfp_net_tx_ring_hw_cfg_write(struct nfp_net *nn, + struct nfp_net_tx_ring *tx_ring, unsigned int idx) +{ + nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), tx_ring->dma); + nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), ilog2(tx_ring->cnt)); + nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), tx_ring->r_vec->irq_entry); +} + +void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx) +{ + nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), 0); + nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), 0); + nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), 0); + + nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), 0); + nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), 0); + nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), 0); +} + +void +nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + 
nfp_nfd3_tx_ring_reset(dp, tx_ring); +} + +void nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + nfp_nfd3_rx_ring_fill_freelist(dp, rx_ring); +} + +int +nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + return nfp_nfd3_tx_ring_alloc(dp, tx_ring); +} + +void +nfp_net_tx_ring_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + nfp_nfd3_tx_ring_free(tx_ring); +} + +int nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + return nfp_nfd3_tx_ring_bufs_alloc(dp, tx_ring); +} + +void nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + nfp_nfd3_tx_ring_bufs_free(dp, tx_ring); +} + +void +nfp_net_debugfs_print_tx_descs(struct seq_file *file, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p) +{ + nfp_nfd3_print_tx_descs(file, r_vec, tx_ring, d_rd_p, d_wr_p); +} + +bool __nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) +{ + return __nfp_nfd3_ctrl_tx(nn, skb); +} + +bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) +{ + return nfp_nfd3_ctrl_tx(nn, skb); +} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h new file mode 100644 index 000000000000..30ccdf5aa819 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -0,0 +1,120 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2019 Netronome Systems, Inc. */ + +#ifndef _NFP_NET_DP_ +#define _NFP_NET_DP_ + +#include "nfp_net.h" +#include "nfd3/nfd3.h" + +static inline dma_addr_t nfp_net_dma_map_rx(struct nfp_net_dp *dp, void *frag) +{ + return dma_map_single_attrs(dp->dev, frag + NFP_NET_RX_BUF_HEADROOM, + dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, + dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC); +} + +static inline void +nfp_net_dma_sync_dev_rx(const struct nfp_net_dp *dp, dma_addr_t dma_addr) +{ + dma_sync_single_for_device(dp->dev, dma_addr, + dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, + dp->rx_dma_dir); +} + +static inline void nfp_net_dma_unmap_rx(struct nfp_net_dp *dp, + dma_addr_t dma_addr) +{ + dma_unmap_single_attrs(dp->dev, dma_addr, + dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA, + dp->rx_dma_dir, DMA_ATTR_SKIP_CPU_SYNC); +} + +static inline void nfp_net_dma_sync_cpu_rx(struct nfp_net_dp *dp, + dma_addr_t dma_addr, + unsigned int len) +{ + dma_sync_single_for_cpu(dp->dev, dma_addr - NFP_NET_RX_BUF_HEADROOM, + len, dp->rx_dma_dir); +} + +/** + * nfp_net_tx_full() - check if the TX ring is full + * @tx_ring: TX ring to check + * @dcnt: Number of descriptors that need to be enqueued (must be >= 1) + * + * This function checks, based on the *host copy* of read/write + * pointer if a given TX ring is full. The real TX queue may have + * some newly made available slots. + * + * Return: True if the ring is full. 
+ */ +static inline int nfp_net_tx_full(struct nfp_net_tx_ring *tx_ring, int dcnt) +{ + return (tx_ring->wr_p - tx_ring->rd_p) >= (tx_ring->cnt - dcnt); +} + +static inline void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring) +{ + wmb(); /* drain writebuffer */ + nfp_qcp_wr_ptr_add(tx_ring->qcp_q, tx_ring->wr_ptr_add); + tx_ring->wr_ptr_add = 0; +} + +static inline void nfp_net_free_frag(void *frag, bool xdp) +{ + if (!xdp) + skb_free_frag(frag); + else + __free_page(virt_to_page(frag)); +} + +/** + * nfp_net_irq_unmask() - Unmask automasked interrupt + * @nn: NFP Network structure + * @entry_nr: MSI-X table entry + * + * Clear the ICR for the IRQ entry. + */ +static inline void nfp_net_irq_unmask(struct nfp_net *nn, unsigned int entry_nr) +{ + nn_writeb(nn, NFP_NET_CFG_ICR(entry_nr), NFP_NET_CFG_ICR_UNMASKED); + nn_pci_flush(nn); +} + +struct seq_file; + +/* Common */ +void +nfp_net_rx_ring_hw_cfg_write(struct nfp_net *nn, + struct nfp_net_rx_ring *rx_ring, unsigned int idx); +void +nfp_net_tx_ring_hw_cfg_write(struct nfp_net *nn, + struct nfp_net_tx_ring *tx_ring, unsigned int idx); +void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx); + +void *nfp_net_rx_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr); +int nfp_net_rx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp); +int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp); +void nfp_net_rx_rings_free(struct nfp_net_dp *dp); +void nfp_net_tx_rings_free(struct nfp_net_dp *dp); +void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring); + +void +nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); +void nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring); +int +nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); +void +nfp_net_tx_ring_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); +int nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); +void nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); +void +nfp_net_debugfs_print_tx_descs(struct seq_file *file, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p); +#endif /* _NFP_NET_DP_ */ diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c index ab7243277efa..50a59aad70f4 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c @@ -10,204 +10,9 @@ #include "nfp_app.h" #include "nfp_net.h" +#include "nfp_net_dp.h" #include "nfp_net_xsk.h" -static int nfp_net_tx_space(struct nfp_net_tx_ring *tx_ring) -{ - return tx_ring->cnt - tx_ring->wr_p + tx_ring->rd_p - 1; -} - -static void nfp_net_xsk_tx_free(struct nfp_net_tx_buf *txbuf) -{ - xsk_buff_free(txbuf->xdp); - - txbuf->dma_addr = 0; - txbuf->xdp = NULL; -} - -void nfp_net_xsk_tx_bufs_free(struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_tx_buf *txbuf; - unsigned int idx; - - while (tx_ring->rd_p != tx_ring->wr_p) { - idx = D_IDX(tx_ring, tx_ring->rd_p); - txbuf = &tx_ring->txbufs[idx]; - - txbuf->real_len = 0; - - tx_ring->qcp_rd_p++; - tx_ring->rd_p++; - - if (tx_ring->r_vec->xsk_pool) { - if (txbuf->is_xsk_tx) - nfp_net_xsk_tx_free(txbuf); - - xsk_tx_completed(tx_ring->r_vec->xsk_pool, 1); - } - } -} - -static bool nfp_net_xsk_complete(struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - u32 done_pkts = 0, 
done_bytes = 0, reused = 0; - bool done_all; - int idx, todo; - u32 qcp_rd_p; - - if (tx_ring->wr_p == tx_ring->rd_p) - return true; - - /* Work out how many descriptors have been transmitted. */ - qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); - - if (qcp_rd_p == tx_ring->qcp_rd_p) - return true; - - todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); - - done_all = todo <= NFP_NET_XDP_MAX_COMPLETE; - todo = min(todo, NFP_NET_XDP_MAX_COMPLETE); - - tx_ring->qcp_rd_p = D_IDX(tx_ring, tx_ring->qcp_rd_p + todo); - - done_pkts = todo; - while (todo--) { - struct nfp_net_tx_buf *txbuf; - - idx = D_IDX(tx_ring, tx_ring->rd_p); - tx_ring->rd_p++; - - txbuf = &tx_ring->txbufs[idx]; - if (unlikely(!txbuf->real_len)) - continue; - - done_bytes += txbuf->real_len; - txbuf->real_len = 0; - - if (txbuf->is_xsk_tx) { - nfp_net_xsk_tx_free(txbuf); - reused++; - } - } - - u64_stats_update_begin(&r_vec->tx_sync); - r_vec->tx_bytes += done_bytes; - r_vec->tx_pkts += done_pkts; - u64_stats_update_end(&r_vec->tx_sync); - - xsk_tx_completed(r_vec->xsk_pool, done_pkts - reused); - - WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, - "XDP TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", - tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); - - return done_all; -} - -static void nfp_net_xsk_tx(struct nfp_net_tx_ring *tx_ring) -{ - struct nfp_net_r_vector *r_vec = tx_ring->r_vec; - struct xdp_desc desc[NFP_NET_XSK_TX_BATCH]; - struct xsk_buff_pool *xsk_pool; - struct nfp_net_tx_desc *txd; - u32 pkts = 0, wr_idx; - u32 i, got; - - xsk_pool = r_vec->xsk_pool; - - while (nfp_net_tx_space(tx_ring) >= NFP_NET_XSK_TX_BATCH) { - for (i = 0; i < NFP_NET_XSK_TX_BATCH; i++) - if (!xsk_tx_peek_desc(xsk_pool, &desc[i])) - break; - got = i; - if (!got) - break; - - wr_idx = D_IDX(tx_ring, tx_ring->wr_p + i); - prefetchw(&tx_ring->txds[wr_idx]); - - for (i = 0; i < got; i++) - xsk_buff_raw_dma_sync_for_device(xsk_pool, desc[i].addr, - desc[i].len); - - for (i = 0; i < got; i++) { - wr_idx = D_IDX(tx_ring, tx_ring->wr_p + i); - - tx_ring->txbufs[wr_idx].real_len = desc[i].len; - tx_ring->txbufs[wr_idx].is_xsk_tx = false; - - /* Build TX descriptor. */ - txd = &tx_ring->txds[wr_idx]; - nfp_desc_set_dma_addr(txd, - xsk_buff_raw_get_dma(xsk_pool, - desc[i].addr - )); - txd->offset_eop = PCIE_DESC_TX_EOP; - txd->dma_len = cpu_to_le16(desc[i].len); - txd->data_len = cpu_to_le16(desc[i].len); - } - - tx_ring->wr_p += got; - pkts += got; - } - - if (!pkts) - return; - - xsk_tx_release(xsk_pool); - /* Ensure all records are visible before incrementing write counter. 
*/ - wmb(); - nfp_qcp_wr_ptr_add(tx_ring->qcp_q, pkts); -} - -static bool -nfp_net_xsk_tx_xdp(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, - struct nfp_net_rx_ring *rx_ring, - struct nfp_net_tx_ring *tx_ring, - struct nfp_net_xsk_rx_buf *xrxbuf, unsigned int pkt_len, - int pkt_off) -{ - struct xsk_buff_pool *pool = r_vec->xsk_pool; - struct nfp_net_tx_buf *txbuf; - struct nfp_net_tx_desc *txd; - unsigned int wr_idx; - - if (nfp_net_tx_space(tx_ring) < 1) - return false; - - xsk_buff_raw_dma_sync_for_device(pool, xrxbuf->dma_addr + pkt_off, pkt_len); - - wr_idx = D_IDX(tx_ring, tx_ring->wr_p); - - txbuf = &tx_ring->txbufs[wr_idx]; - txbuf->xdp = xrxbuf->xdp; - txbuf->real_len = pkt_len; - txbuf->is_xsk_tx = true; - - /* Build TX descriptor */ - txd = &tx_ring->txds[wr_idx]; - txd->offset_eop = PCIE_DESC_TX_EOP; - txd->dma_len = cpu_to_le16(pkt_len); - nfp_desc_set_dma_addr(txd, xrxbuf->dma_addr + pkt_off); - txd->data_len = cpu_to_le16(pkt_len); - - txd->flags = 0; - txd->mss = 0; - txd->lso_hdrlen = 0; - - tx_ring->wr_ptr_add++; - tx_ring->wr_p++; - - return true; -} - -static int nfp_net_rx_space(struct nfp_net_rx_ring *rx_ring) -{ - return rx_ring->cnt - rx_ring->wr_p + rx_ring->rd_p - 1; -} - static void nfp_net_xsk_rx_bufs_stash(struct nfp_net_rx_ring *rx_ring, unsigned int idx, struct xdp_buff *xdp) @@ -224,13 +29,13 @@ nfp_net_xsk_rx_bufs_stash(struct nfp_net_rx_ring *rx_ring, unsigned int idx, xsk_buff_xdp_get_frame_dma(xdp) + headroom; } -static void nfp_net_xsk_rx_unstash(struct nfp_net_xsk_rx_buf *rxbuf) +void nfp_net_xsk_rx_unstash(struct nfp_net_xsk_rx_buf *rxbuf) { rxbuf->dma_addr = 0; rxbuf->xdp = NULL; } -static void nfp_net_xsk_rx_free(struct nfp_net_xsk_rx_buf *rxbuf) +void nfp_net_xsk_rx_free(struct nfp_net_xsk_rx_buf *rxbuf) { if (rxbuf->xdp) xsk_buff_free(rxbuf->xdp); @@ -277,8 +82,8 @@ void nfp_net_xsk_rx_ring_fill_freelist(struct nfp_net_rx_ring *rx_ring) nfp_qcp_wr_ptr_add(rx_ring->qcp_fl, wr_ptr_add); } -static void nfp_net_xsk_rx_drop(struct nfp_net_r_vector *r_vec, - struct nfp_net_xsk_rx_buf *xrxbuf) +void nfp_net_xsk_rx_drop(struct nfp_net_r_vector *r_vec, + struct nfp_net_xsk_rx_buf *xrxbuf) { u64_stats_update_begin(&r_vec->rx_sync); r_vec->rx_drops++; @@ -287,213 +92,6 @@ static void nfp_net_xsk_rx_drop(struct nfp_net_r_vector *r_vec, nfp_net_xsk_rx_free(xrxbuf); } -static void nfp_net_xsk_rx_skb(struct nfp_net_rx_ring *rx_ring, - const struct nfp_net_rx_desc *rxd, - struct nfp_net_xsk_rx_buf *xrxbuf, - const struct nfp_meta_parsed *meta, - unsigned int pkt_len, - bool meta_xdp, - unsigned int *skbs_polled) -{ - struct nfp_net_r_vector *r_vec = rx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - struct net_device *netdev; - struct sk_buff *skb; - - if (likely(!meta->portid)) { - netdev = dp->netdev; - } else { - struct nfp_net *nn = netdev_priv(dp->netdev); - - netdev = nfp_app_dev_get(nn->app, meta->portid, NULL); - if (unlikely(!netdev)) { - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - return; - } - nfp_repr_inc_rx_stats(netdev, pkt_len); - } - - skb = napi_alloc_skb(&r_vec->napi, pkt_len); - if (!skb) { - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - return; - } - memcpy(skb_put(skb, pkt_len), xrxbuf->xdp->data, pkt_len); - - skb->mark = meta->mark; - skb_set_hash(skb, meta->hash, meta->hash_type); - - skb_record_rx_queue(skb, rx_ring->idx); - skb->protocol = eth_type_trans(skb, netdev); - - nfp_net_rx_csum(dp, r_vec, rxd, meta, skb); - - if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), - 
le16_to_cpu(rxd->rxd.vlan)); - if (meta_xdp) - skb_metadata_set(skb, - xrxbuf->xdp->data - xrxbuf->xdp->data_meta); - - napi_gro_receive(&rx_ring->r_vec->napi, skb); - - nfp_net_xsk_rx_free(xrxbuf); - - (*skbs_polled)++; -} - -static unsigned int -nfp_net_xsk_rx(struct nfp_net_rx_ring *rx_ring, int budget, - unsigned int *skbs_polled) -{ - struct nfp_net_r_vector *r_vec = rx_ring->r_vec; - struct nfp_net_dp *dp = &r_vec->nfp_net->dp; - struct nfp_net_tx_ring *tx_ring; - struct bpf_prog *xdp_prog; - bool xdp_redir = false; - int pkts_polled = 0; - - xdp_prog = READ_ONCE(dp->xdp_prog); - tx_ring = r_vec->xdp_ring; - - while (pkts_polled < budget) { - unsigned int meta_len, data_len, pkt_len, pkt_off; - struct nfp_net_xsk_rx_buf *xrxbuf; - struct nfp_net_rx_desc *rxd; - struct nfp_meta_parsed meta; - int idx, act; - - idx = D_IDX(rx_ring, rx_ring->rd_p); - - rxd = &rx_ring->rxds[idx]; - if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) - break; - - rx_ring->rd_p++; - pkts_polled++; - - xrxbuf = &rx_ring->xsk_rxbufs[idx]; - - /* If starved of buffers "drop" it and scream. */ - if (rx_ring->rd_p >= rx_ring->wr_p) { - nn_dp_warn(dp, "Starved of RX buffers\n"); - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - break; - } - - /* Memory barrier to ensure that we won't do other reads - * before the DD bit. - */ - dma_rmb(); - - memset(&meta, 0, sizeof(meta)); - - /* Only supporting AF_XDP with dynamic metadata so buffer layout - * is always: - * - * --------------------------------------------------------- - * | off | metadata | packet | XXXX | - * --------------------------------------------------------- - */ - meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; - data_len = le16_to_cpu(rxd->rxd.data_len); - pkt_len = data_len - meta_len; - - if (unlikely(meta_len > NFP_NET_MAX_PREPEND)) { - nn_dp_warn(dp, "Oversized RX packet metadata %u\n", - meta_len); - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - continue; - } - - /* Stats update. 
*/ - u64_stats_update_begin(&r_vec->rx_sync); - r_vec->rx_pkts++; - r_vec->rx_bytes += pkt_len; - u64_stats_update_end(&r_vec->rx_sync); - - xrxbuf->xdp->data += meta_len; - xrxbuf->xdp->data_end = xrxbuf->xdp->data + pkt_len; - xdp_set_data_meta_invalid(xrxbuf->xdp); - xsk_buff_dma_sync_for_cpu(xrxbuf->xdp, r_vec->xsk_pool); - net_prefetch(xrxbuf->xdp->data); - - if (meta_len) { - if (unlikely(nfp_net_parse_meta(dp->netdev, &meta, - xrxbuf->xdp->data - - meta_len, - xrxbuf->xdp->data, - pkt_len, meta_len))) { - nn_dp_warn(dp, "Invalid RX packet metadata\n"); - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - continue; - } - - if (unlikely(meta.portid)) { - struct nfp_net *nn = netdev_priv(dp->netdev); - - if (meta.portid != NFP_META_PORT_ID_CTRL) { - nfp_net_xsk_rx_skb(rx_ring, rxd, xrxbuf, - &meta, pkt_len, - false, skbs_polled); - continue; - } - - nfp_app_ctrl_rx_raw(nn->app, xrxbuf->xdp->data, - pkt_len); - nfp_net_xsk_rx_free(xrxbuf); - continue; - } - } - - act = bpf_prog_run_xdp(xdp_prog, xrxbuf->xdp); - - pkt_len = xrxbuf->xdp->data_end - xrxbuf->xdp->data; - pkt_off = xrxbuf->xdp->data - xrxbuf->xdp->data_hard_start; - - switch (act) { - case XDP_PASS: - nfp_net_xsk_rx_skb(rx_ring, rxd, xrxbuf, &meta, pkt_len, - true, skbs_polled); - break; - case XDP_TX: - if (!nfp_net_xsk_tx_xdp(dp, r_vec, rx_ring, tx_ring, - xrxbuf, pkt_len, pkt_off)) - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - else - nfp_net_xsk_rx_unstash(xrxbuf); - break; - case XDP_REDIRECT: - if (xdp_do_redirect(dp->netdev, xrxbuf->xdp, xdp_prog)) { - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - } else { - nfp_net_xsk_rx_unstash(xrxbuf); - xdp_redir = true; - } - break; - default: - bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); - fallthrough; - case XDP_ABORTED: - trace_xdp_exception(dp->netdev, xdp_prog, act); - fallthrough; - case XDP_DROP: - nfp_net_xsk_rx_drop(r_vec, xrxbuf); - break; - } - } - - nfp_net_xsk_rx_ring_fill_freelist(r_vec->rx_ring); - - if (xdp_redir) - xdp_do_flush_map(); - - if (tx_ring->wr_ptr_add) - nfp_net_tx_xmit_more_flush(tx_ring); - - return pkts_polled; -} - static void nfp_net_xsk_pool_unmap(struct device *dev, struct xsk_buff_pool *pool) { @@ -566,27 +164,3 @@ int nfp_net_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags) return 0; } - -int nfp_net_xsk_poll(struct napi_struct *napi, int budget) -{ - struct nfp_net_r_vector *r_vec = - container_of(napi, struct nfp_net_r_vector, napi); - unsigned int pkts_polled, skbs = 0; - - pkts_polled = nfp_net_xsk_rx(r_vec->rx_ring, budget, &skbs); - - if (pkts_polled < budget) { - if (r_vec->tx_ring) - nfp_net_tx_complete(r_vec->tx_ring, budget); - - if (!nfp_net_xsk_complete(r_vec->xdp_ring)) - pkts_polled = budget; - - nfp_net_xsk_tx(r_vec->xdp_ring); - - if (pkts_polled < budget && napi_complete_done(napi, skbs)) - nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); - } - - return pkts_polled; -} diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.h b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.h index 5c8549cb3543..6d281eb2fc1c 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.h @@ -15,15 +15,27 @@ static inline bool nfp_net_has_xsk_pool_slow(struct nfp_net_dp *dp, return dp->xdp_prog && dp->xsk_pools[qid]; } +static inline int nfp_net_rx_space(struct nfp_net_rx_ring *rx_ring) +{ + return rx_ring->cnt - rx_ring->wr_p + rx_ring->rd_p - 1; +} + +static inline int nfp_net_tx_space(struct nfp_net_tx_ring *tx_ring) +{ + return tx_ring->cnt - tx_ring->wr_p + tx_ring->rd_p - 1; +} 
+ +void nfp_net_xsk_rx_unstash(struct nfp_net_xsk_rx_buf *rxbuf); +void nfp_net_xsk_rx_free(struct nfp_net_xsk_rx_buf *rxbuf); +void nfp_net_xsk_rx_drop(struct nfp_net_r_vector *r_vec, + struct nfp_net_xsk_rx_buf *xrxbuf); int nfp_net_xsk_setup_pool(struct net_device *netdev, struct xsk_buff_pool *pool, u16 queue_id); -void nfp_net_xsk_tx_bufs_free(struct nfp_net_tx_ring *tx_ring); void nfp_net_xsk_rx_bufs_free(struct nfp_net_rx_ring *rx_ring); void nfp_net_xsk_rx_ring_fill_freelist(struct nfp_net_rx_ring *rx_ring); int nfp_net_xsk_wakeup(struct net_device *netdev, u32 queue_id, u32 flags); -int nfp_net_xsk_poll(struct napi_struct *napi, int budget); #endif /* _NFP_XSK_H_ */ From patchwork Mon Mar 21 10:42:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Horman X-Patchwork-Id: 12787076 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08884C433F5 for ; Mon, 21 Mar 2022 10:42:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346433AbiCUKoC (ORCPT ); Mon, 21 Mar 2022 06:44:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34344 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346428AbiCUKn6 (ORCPT ); Mon, 21 Mar 2022 06:43:58 -0400 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2132.outbound.protection.outlook.com [40.107.243.132]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1AA4E55230 for ; Mon, 21 Mar 2022 03:42:33 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=bS8/H6y6e+KPKQUxypIiISCg60FwMcZ16LYKh41lHDLUqiTFTipy7B9E1juA/jp4hhh26oNij/zAG10Dy00V+FEAleq4yIIqR2fB1vTgLYlX3CcF1jKTDf/LuYhhW/ygLpkjKOpuIfka/hdpDWoqqa89VTwetagN8UVGII1qS4UCjDtauXXdQEq+D7CTjZqovRQLpquWkHk3iMKYpJ8vnwt5SaVi/dLvmQeD/vwoGq9hVnexxjys5PosMfbBTAN7n+0U9w+TnNcQf2W9bJQWz4nDbCu2Qh4T4vKv718DbVMdXlXKKoz6d33YP2ByEE+wtTmbSoB5B5YlJSaf7SnY3g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=qRwIsxME4yECKGc6XtiOwqUCMk58u83rwT8iL1OC/rY=; b=OgaK719xz0+l4emryoxAqo7r3dXEoRgTQrx+ludpMwAGk+OybbDxWVDo2aEe1nbmzk3SnhZMvmwRm8aeq98s02g1xi6hI+/SZn8IlX3Vj55Zvl2unqc+55cxNAkN1b+q/ey7x2Oq/cw70OUtYZehOHroXidYvuG3ELbpl/9a8RXnzLMZvibMQzCC52kjC4e9iVNplHOmVSAuh31xoG/vIW5eqLYy81JmHMeii/IGdIbQmJMt39aL5sBiXkZlvby62DGPlR8r0Ngdsc2kK0qbCkXj4sePf7vfilh9/XOj5iGWy1b/tFFAkn4706yqTpqQpSd8HAvKURxnXFEUOr8k4w== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com; dkim=pass header.d=corigine.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=qRwIsxME4yECKGc6XtiOwqUCMk58u83rwT8iL1OC/rY=; b=lqORtbejmSkraAbE2on2lQBrkxfXP8l6JzQtiT+HlvFQS6klfs370X59pasm5YV370eGoXVvhgzDTS2OeLEfW8Cpq1RvxTzN3CgA0sR58OfB5GNmal41PeUKta2eXMTm1De3z4hkwDsAydYd/sZvYIBdMrsZqh0i0k+ydkVOnO4= Authentication-Results: dkim=none (message not signed) 
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 03/10] nfp: use callbacks for slow path ring related functions
Date: Mon, 21 Mar 2022 11:42:02 +0100
Message-Id: <20220321104209.273535-4-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
From: Jakub Kicinski

To reduce the coupling of slow path ring implementations and their callers, use callbacks instead.
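The shape of the change is easiest to see in isolation: each datapath generation supplies a constant table of function pointers, and the generic code calls through that table instead of naming NFD3 functions directly. Below is a minimal, hypothetical sketch of that callback-table pattern; the names (ring_ops, nfd3_ring_ops, generic_tx_ring_alloc) are invented for illustration only, the driver's real table is struct nfp_dp_ops in the diff that follows.

/* Minimal sketch of the callback-table pattern; all names here are
 * illustrative, not the driver's real symbols.
 */
struct tx_ring;					/* opaque ring state */

struct ring_ops {
	int  (*tx_ring_alloc)(struct tx_ring *ring);
	void (*tx_ring_free)(struct tx_ring *ring);
};

/* One implementation per datapath generation. */
static int nfd3_tx_ring_alloc(struct tx_ring *ring) { (void)ring; return 0; }
static void nfd3_tx_ring_free(struct tx_ring *ring) { (void)ring; }

static const struct ring_ops nfd3_ring_ops = {
	.tx_ring_alloc	= nfd3_tx_ring_alloc,
	.tx_ring_free	= nfd3_tx_ring_free,
};

/* Callers only see the ops pointer, so adding a new datapath means
 * adding a table rather than touching every call site.
 */
static inline int generic_tx_ring_alloc(const struct ring_ops *ops,
					struct tx_ring *ring)
{
	return ops->tx_ring_alloc(ring);
}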
Changes to Jakub's work: * Also use callbacks for xmit functions Signed-off-by: Jakub Kicinski Signed-off-by: Yinjun Zhang Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfd3/dp.c | 27 +---- .../net/ethernet/netronome/nfp/nfd3/nfd3.h | 28 +---- .../net/ethernet/netronome/nfp/nfd3/rings.c | 28 ++++- drivers/net/ethernet/netronome/nfp/nfp_net.h | 4 + .../ethernet/netronome/nfp/nfp_net_common.c | 7 +- .../ethernet/netronome/nfp/nfp_net_debugfs.c | 2 +- .../net/ethernet/netronome/nfp/nfp_net_dp.c | 55 ++------- .../net/ethernet/netronome/nfp/nfp_net_dp.h | 111 +++++++++++++++--- 8 files changed, 148 insertions(+), 114 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c index b2a34a09471b..619f4d09e4e0 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -1131,9 +1131,9 @@ int nfp_nfd3_poll(struct napi_struct *napi, int budget) /* Control device data path */ -static bool -nfp_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, - struct sk_buff *skb, bool old) +bool +nfp_nfd3_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old) { unsigned int real_len = skb->len, meta_len = 0; struct nfp_net_tx_ring *tx_ring; @@ -1215,31 +1215,12 @@ nfp_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, return false; } -bool __nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; - - return nfp_ctrl_tx_one(nn, r_vec, skb, false); -} - -bool nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; - bool ret; - - spin_lock_bh(&r_vec->lock); - ret = nfp_ctrl_tx_one(nn, r_vec, skb, false); - spin_unlock_bh(&r_vec->lock); - - return ret; -} - static void __nfp_ctrl_tx_queued(struct nfp_net_r_vector *r_vec) { struct sk_buff *skb; while ((skb = __skb_dequeue(&r_vec->queue))) - if (nfp_ctrl_tx_one(r_vec->nfp_net, r_vec, skb, true)) + if (nfp_nfd3_ctrl_tx_one(r_vec->nfp_net, r_vec, skb, true)) return; } diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h index 0bd597ad6c6e..7a0df9e6c3c4 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h +++ b/drivers/net/ethernet/netronome/nfp/nfd3/nfd3.h @@ -93,34 +93,14 @@ nfp_nfd3_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, void *data, void *pkt, unsigned int pkt_len, int meta_len); void nfp_nfd3_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget); int nfp_nfd3_poll(struct napi_struct *napi, int budget); +netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev); +bool +nfp_nfd3_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old); void nfp_nfd3_ctrl_poll(struct tasklet_struct *t); void nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring); void nfp_nfd3_xsk_tx_free(struct nfp_nfd3_tx_buf *txbuf); int nfp_nfd3_xsk_poll(struct napi_struct *napi, int budget); -void -nfp_nfd3_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); -void -nfp_nfd3_rx_ring_fill_freelist(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring); -int -nfp_nfd3_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); -void -nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring); -int -nfp_nfd3_tx_ring_bufs_alloc(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring); -void 
-nfp_nfd3_tx_ring_bufs_free(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring); -void -nfp_nfd3_print_tx_descs(struct seq_file *file, - struct nfp_net_r_vector *r_vec, - struct nfp_net_tx_ring *tx_ring, - u32 d_rd_p, u32 d_wr_p); -netdev_tx_t nfp_nfd3_tx(struct sk_buff *skb, struct net_device *netdev); -bool nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); -bool __nfp_nfd3_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); - #endif diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c index 4c6aaebb7522..342871d23e15 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c @@ -38,7 +38,7 @@ static void nfp_nfd3_xsk_tx_bufs_free(struct nfp_net_tx_ring *tx_ring) * * Assumes that the device is stopped, must be idempotent. */ -void +static void nfp_nfd3_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) { struct netdev_queue *nd_q; @@ -98,7 +98,7 @@ nfp_nfd3_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) * nfp_nfd3_tx_ring_free() - Free resources allocated to a TX ring * @tx_ring: TX ring to free */ -void nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring) +static void nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring) { struct nfp_net_r_vector *r_vec = tx_ring->r_vec; struct nfp_net_dp *dp = &r_vec->nfp_net->dp; @@ -123,7 +123,7 @@ void nfp_nfd3_tx_ring_free(struct nfp_net_tx_ring *tx_ring) * * Return: 0 on success, negative errno otherwise. */ -int +static int nfp_nfd3_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) { struct nfp_net_r_vector *r_vec = tx_ring->r_vec; @@ -156,7 +156,7 @@ nfp_nfd3_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) return -ENOMEM; } -void +static void nfp_nfd3_tx_ring_bufs_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) { @@ -174,7 +174,7 @@ nfp_nfd3_tx_ring_bufs_free(struct nfp_net_dp *dp, } } -int +static int nfp_nfd3_tx_ring_bufs_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) { @@ -195,7 +195,7 @@ nfp_nfd3_tx_ring_bufs_alloc(struct nfp_net_dp *dp, return 0; } -void +static void nfp_nfd3_print_tx_descs(struct seq_file *file, struct nfp_net_r_vector *r_vec, struct nfp_net_tx_ring *tx_ring, @@ -241,3 +241,19 @@ nfp_nfd3_print_tx_descs(struct seq_file *file, seq_putc(file, '\n'); } } + +const struct nfp_dp_ops nfp_nfd3_ops = { + .version = NFP_NFD_VER_NFD3, + .poll = nfp_nfd3_poll, + .xsk_poll = nfp_nfd3_xsk_poll, + .ctrl_poll = nfp_nfd3_ctrl_poll, + .xmit = nfp_nfd3_tx, + .ctrl_tx_one = nfp_nfd3_ctrl_tx_one, + .rx_ring_fill_freelist = nfp_nfd3_rx_ring_fill_freelist, + .tx_ring_alloc = nfp_nfd3_tx_ring_alloc, + .tx_ring_reset = nfp_nfd3_tx_ring_reset, + .tx_ring_free = nfp_nfd3_tx_ring_free, + .tx_ring_bufs_alloc = nfp_nfd3_tx_ring_bufs_alloc, + .tx_ring_bufs_free = nfp_nfd3_tx_ring_bufs_free, + .print_tx_descs = nfp_nfd3_print_tx_descs +}; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index bffe53f4c2d3..13a9e6731d0d 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -98,6 +98,7 @@ /* Forward declarations */ struct nfp_cpp; struct nfp_dev_info; +struct nfp_dp_ops; struct nfp_eth_table_port; struct nfp_net; struct nfp_net_r_vector; @@ -439,6 +440,7 @@ struct nfp_stat_pair { * @rx_rings: Array of pre-allocated RX ring structures * @ctrl_bar: Pointer to mapped control BAR * + * @ops: Callbacks and 
parameters for this vNIC's NFD version * @txd_cnt: Size of the TX ring in number of descriptors * @rxd_cnt: Size of the RX ring in number of descriptors * @num_r_vecs: Number of used ring vectors @@ -473,6 +475,8 @@ struct nfp_net_dp { /* Cold data follows */ + const struct nfp_dp_ops *ops; + unsigned int txd_cnt; unsigned int rxd_cnt; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 1d3277068301..dd234f5228f1 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -754,7 +754,7 @@ static void nfp_net_vecs_init(struct nfp_net *nn) __skb_queue_head_init(&r_vec->queue); spin_lock_init(&r_vec->lock); - tasklet_setup(&r_vec->tasklet, nfp_nfd3_ctrl_poll); + tasklet_setup(&r_vec->tasklet, nn->dp.ops->ctrl_poll); tasklet_disable(&r_vec->tasklet); } @@ -768,7 +768,7 @@ nfp_net_napi_add(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, int idx) if (dp->netdev) netif_napi_add(dp->netdev, &r_vec->napi, nfp_net_has_xsk_pool_slow(dp, idx) ? - nfp_nfd3_xsk_poll : nfp_nfd3_poll, + dp->ops->xsk_poll : dp->ops->poll, NAPI_POLL_WEIGHT); else tasklet_enable(&r_vec->tasklet); @@ -1895,7 +1895,7 @@ const struct net_device_ops nfp_net_netdev_ops = { .ndo_uninit = nfp_app_ndo_uninit, .ndo_open = nfp_net_netdev_open, .ndo_stop = nfp_net_netdev_close, - .ndo_start_xmit = nfp_nfd3_tx, + .ndo_start_xmit = nfp_net_tx, .ndo_get_stats64 = nfp_net_stat64, .ndo_vlan_rx_add_vid = nfp_net_vlan_rx_add_vid, .ndo_vlan_rx_kill_vid = nfp_net_vlan_rx_kill_vid, @@ -2033,6 +2033,7 @@ nfp_net_alloc(struct pci_dev *pdev, const struct nfp_dev_info *dev_info, nn->dp.ctrl_bar = ctrl_bar; nn->dev_info = dev_info; nn->pdev = pdev; + nn->dp.ops = &nfp_nfd3_ops; nn->max_tx_rings = max_tx_rings; nn->max_rx_rings = max_rx_rings; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c index 59b852e18758..791203d07ac7 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c @@ -105,7 +105,7 @@ static int nfp_tx_q_show(struct seq_file *file, void *data) tx_ring->cnt, &tx_ring->dma, tx_ring->txds, tx_ring->rd_p, tx_ring->wr_p, d_rd_p, d_wr_p); - nfp_net_debugfs_print_tx_descs(file, r_vec, tx_ring, + nfp_net_debugfs_print_tx_descs(file, &nn->dp, r_vec, tx_ring, d_rd_p, d_wr_p); out: rtnl_unlock(); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c index 8fe48569a612..431bd2c13221 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c @@ -392,57 +392,28 @@ void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx) nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), 0); } -void -nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) -{ - nfp_nfd3_tx_ring_reset(dp, tx_ring); -} - -void nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring) +netdev_tx_t nfp_net_tx(struct sk_buff *skb, struct net_device *netdev) { - nfp_nfd3_rx_ring_fill_freelist(dp, rx_ring); -} + struct nfp_net *nn = netdev_priv(netdev); -int -nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) -{ - return nfp_nfd3_tx_ring_alloc(dp, tx_ring); + return nn->dp.ops->xmit(skb, netdev); } -void -nfp_net_tx_ring_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +bool __nfp_ctrl_tx(struct nfp_net *nn, 
struct sk_buff *skb) { - nfp_nfd3_tx_ring_free(tx_ring); -} + struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; -int nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring) -{ - return nfp_nfd3_tx_ring_bufs_alloc(dp, tx_ring); + return nn->dp.ops->ctrl_tx_one(nn, r_vec, skb, false); } -void nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring) -{ - nfp_nfd3_tx_ring_bufs_free(dp, tx_ring); -} - -void -nfp_net_debugfs_print_tx_descs(struct seq_file *file, - struct nfp_net_r_vector *r_vec, - struct nfp_net_tx_ring *tx_ring, - u32 d_rd_p, u32 d_wr_p) +bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) { - nfp_nfd3_print_tx_descs(file, r_vec, tx_ring, d_rd_p, d_wr_p); -} + struct nfp_net_r_vector *r_vec = &nn->r_vecs[0]; + bool ret; -bool __nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - return __nfp_nfd3_ctrl_tx(nn, skb); -} + spin_lock_bh(&r_vec->lock); + ret = nn->dp.ops->ctrl_tx_one(nn, r_vec, skb, false); + spin_unlock_bh(&r_vec->lock); -bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb) -{ - return nfp_nfd3_ctrl_tx(nn, skb); + return ret; } diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h index 30ccdf5aa819..25af0e3c7af7 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -5,7 +5,6 @@ #define _NFP_NET_DP_ #include "nfp_net.h" -#include "nfd3/nfd3.h" static inline dma_addr_t nfp_net_dma_map_rx(struct nfp_net_dp *dp, void *frag) { @@ -100,21 +99,103 @@ void nfp_net_rx_rings_free(struct nfp_net_dp *dp); void nfp_net_tx_rings_free(struct nfp_net_dp *dp); void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring); -void -nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); -void nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, - struct nfp_net_rx_ring *rx_ring); -int -nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); -void -nfp_net_tx_ring_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring); -int nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring); -void nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, - struct nfp_net_tx_ring *tx_ring); -void -nfp_net_debugfs_print_tx_descs(struct seq_file *file, +enum nfp_nfd_version { + NFP_NFD_VER_NFD3, +}; + +/** + * struct nfp_dp_ops - Hooks to wrap different implementation of different dp + * @version: Indicate dp type + * @poll: Napi poll for normal rx/tx + * @xsk_poll: Napi poll when xsk is enabled + * @ctrl_poll: Tasklet poll for ctrl rx/tx + * @xmit: Xmit for normal path + * @ctrl_tx_one: Xmit for ctrl path + * @rx_ring_fill_freelist: Give buffers from the ring to FW + * @tx_ring_alloc: Allocate resource for a TX ring + * @tx_ring_reset: Free any untransmitted buffers and reset pointers + * @tx_ring_free: Free resources allocated to a TX ring + * @tx_ring_bufs_alloc: Allocate resource for each TX buffer + * @tx_ring_bufs_free: Free resources allocated to each TX buffer + * @print_tx_descs: Show TX ring's info for debug purpose + */ +struct nfp_dp_ops { + enum nfp_nfd_version version; + + int (*poll)(struct napi_struct *napi, int budget); + int (*xsk_poll)(struct napi_struct *napi, int budget); + void (*ctrl_poll)(struct tasklet_struct *t); + netdev_tx_t (*xmit)(struct sk_buff *skb, struct net_device *netdev); + bool (*ctrl_tx_one)(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old); + void 
(*rx_ring_fill_freelist)(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring); + int (*tx_ring_alloc)(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); + void (*tx_ring_reset)(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); + void (*tx_ring_free)(struct nfp_net_tx_ring *tx_ring); + int (*tx_ring_bufs_alloc)(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); + void (*tx_ring_bufs_free)(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring); + + void (*print_tx_descs)(struct seq_file *file, struct nfp_net_r_vector *r_vec, struct nfp_net_tx_ring *tx_ring, u32 d_rd_p, u32 d_wr_p); +}; + +static inline void +nfp_net_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + return dp->ops->tx_ring_reset(dp, tx_ring); +} + +static inline void +nfp_net_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + dp->ops->rx_ring_fill_freelist(dp, rx_ring); +} + +static inline int +nfp_net_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + return dp->ops->tx_ring_alloc(dp, tx_ring); +} + +static inline void +nfp_net_tx_ring_free(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + dp->ops->tx_ring_free(tx_ring); +} + +static inline int +nfp_net_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + return dp->ops->tx_ring_bufs_alloc(dp, tx_ring); +} + +static inline void +nfp_net_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + dp->ops->tx_ring_bufs_free(dp, tx_ring); +} + +static inline void +nfp_net_debugfs_print_tx_descs(struct seq_file *file, struct nfp_net_dp *dp, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p) +{ + dp->ops->print_tx_descs(file, r_vec, tx_ring, d_rd_p, d_wr_p); +} + +extern const struct nfp_dp_ops nfp_nfd3_ops; + +netdev_tx_t nfp_net_tx(struct sk_buff *skb, struct net_device *netdev); + #endif /* _NFP_NET_DP_ */
From patchwork Mon Mar 21 10:42:03 2022
X-Patchwork-Submitter: Simon Horman
X-Patchwork-Id: 12787077
X-Patchwork-Delegate: kuba@kernel.org
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 04/10] nfp: prepare for multi-part descriptors
Date: Mon, 21 Mar 2022 11:42:03 +0100
Message-Id: <20220321104209.273535-5-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
From: Jakub Kicinski

New datapaths may use multiple descriptor units to describe a single packet. Prepare for that by adding a descriptors per simple frame constant into ring size calculations.
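The effect on ring sizing is that limits expressed in descriptors must be divided by the per-packet descriptor count before being compared with a ring size expressed in packets. A small sketch of that arithmetic follows; the struct, the helper name and the example numbers are assumptions made for illustration, not driver code.

/* Illustration only: how a per-packet descriptor count scales TX ring
 * limits. Names and values are hypothetical.
 */
#include <stdbool.h>

struct dp_limits {
	unsigned int qc_min;		  /* min queue size, in descriptors */
	unsigned int qc_max;		  /* max queue size, in descriptors */
	unsigned int tx_min_desc_per_pkt; /* descriptors per simple frame */
};

static bool txd_cnt_is_valid(const struct dp_limits *lim, unsigned int txd_cnt)
{
	unsigned int dpp = lim->tx_min_desc_per_pkt;

	/* txd_cnt counts packets, so the descriptor-based limits shrink
	 * by the per-packet descriptor count.
	 */
	return txd_cnt >= lim->qc_min / dpp && txd_cnt <= lim->qc_max / dpp;
}

With a qc_max of 32768 and two descriptors per packet, for example, at most 16384 packets fit in a TX ring.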
Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfd3/rings.c | 1 + drivers/net/ethernet/netronome/nfp/nfp_net.h | 4 ++-- drivers/net/ethernet/netronome/nfp/nfp_net_dp.h | 2 ++ drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c | 8 +++++--- 4 files changed, 10 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c index 342871d23e15..3ebbedd8ba64 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c @@ -244,6 +244,7 @@ nfp_nfd3_print_tx_descs(struct seq_file *file, const struct nfp_dp_ops nfp_nfd3_ops = { .version = NFP_NFD_VER_NFD3, + .tx_min_desc_per_pkt = 1, .poll = nfp_nfd3_poll, .xsk_poll = nfp_nfd3_xsk_poll, .ctrl_poll = nfp_nfd3_ctrl_poll, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 13a9e6731d0d..d4b82c893c2a 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -441,8 +441,8 @@ struct nfp_stat_pair { * @ctrl_bar: Pointer to mapped control BAR * * @ops: Callbacks and parameters for this vNIC's NFD version - * @txd_cnt: Size of the TX ring in number of descriptors - * @rxd_cnt: Size of the RX ring in number of descriptors + * @txd_cnt: Size of the TX ring in number of min size packets + * @rxd_cnt: Size of the RX ring in number of min size packets * @num_r_vecs: Number of used ring vectors * @num_tx_rings: Currently configured number of TX rings * @num_stack_tx_rings: Number of TX rings used by the stack (not XDP) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h index 25af0e3c7af7..81be8d17fa93 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -106,6 +106,7 @@ enum nfp_nfd_version { /** * struct nfp_dp_ops - Hooks to wrap different implementation of different dp * @version: Indicate dp type + * @tx_min_desc_per_pkt: Minimal TX descs needed for each packet * @poll: Napi poll for normal rx/tx * @xsk_poll: Napi poll when xsk is enabled * @ctrl_poll: Tasklet poll for ctrl rx/tx @@ -121,6 +122,7 @@ enum nfp_nfd_version { */ struct nfp_dp_ops { enum nfp_nfd_version version; + unsigned int tx_min_desc_per_pkt; int (*poll)(struct napi_struct *napi, int budget); int (*xsk_poll)(struct napi_struct *napi, int budget); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index b9abae176793..7d7150600485 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -26,6 +26,7 @@ #include "nfp_app.h" #include "nfp_main.h" #include "nfp_net_ctrl.h" +#include "nfp_net_dp.h" #include "nfp_net.h" #include "nfp_port.h" @@ -390,7 +391,7 @@ static void nfp_net_get_ringparam(struct net_device *netdev, u32 qc_max = nn->dev_info->max_qc_size; ring->rx_max_pending = qc_max; - ring->tx_max_pending = qc_max; + ring->tx_max_pending = qc_max / nn->dp.ops->tx_min_desc_per_pkt; ring->rx_pending = nn->dp.rxd_cnt; ring->tx_pending = nn->dp.txd_cnt; } @@ -414,8 +415,8 @@ static int nfp_net_set_ringparam(struct net_device *netdev, struct kernel_ethtool_ringparam *kernel_ring, struct netlink_ext_ack *extack) { + u32 tx_dpp, qc_min, qc_max, rxd_cnt, txd_cnt; struct nfp_net *nn = netdev_priv(netdev); - u32 qc_min, qc_max, 
rxd_cnt, txd_cnt; /* We don't have separate queues/rings for small/large frames. */ if (ring->rx_mini_pending || ring->rx_jumbo_pending) @@ -423,12 +424,13 @@ static int nfp_net_set_ringparam(struct net_device *netdev, qc_min = nn->dev_info->min_qc_size; qc_max = nn->dev_info->max_qc_size; + tx_dpp = nn->dp.ops->tx_min_desc_per_pkt; /* Round up to supported values */ rxd_cnt = roundup_pow_of_two(ring->rx_pending); txd_cnt = roundup_pow_of_two(ring->tx_pending); if (rxd_cnt < qc_min || rxd_cnt > qc_max || - txd_cnt < qc_min || txd_cnt > qc_max) + txd_cnt < qc_min / tx_dpp || txd_cnt > qc_max / tx_dpp) return -EINVAL; if (nn->dp.rxd_cnt == rxd_cnt && nn->dp.txd_cnt == txd_cnt)
From patchwork Mon Mar 21 10:42:04 2022
X-Patchwork-Submitter: Simon Horman
X-Patchwork-Id: 12787079
X-Patchwork-Delegate: kuba@kernel.org
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 05/10] nfp: move tx_ring->qcidx into cold data
Date: Mon, 21 Mar 2022 11:42:04 +0100
Message-Id: <20220321104209.273535-6-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
From: Jakub Kicinski

QCidx is not used on fast path, move it to the lower cacheline.
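The change is purely about cache layout: fields touched on every transmit or completion should share the leading cachelines, while fields only read at setup time can trail behind them. A tiny hypothetical example of the same hot/cold split is below; the struct and field names are invented, not the driver's.

/* Hypothetical hot/cold field split; only the layout idea matters. */
#include <stddef.h>
#include <stdint.h>

struct example_tx_ring {
	/* Hot data: read and written on every packet. */
	uint32_t wr_p;		/* producer index */
	uint32_t rd_p;		/* consumer index */
	void *descs;		/* descriptor memory */

	/* Cold data follows: only used in slow-path setup, teardown or
	 * debug, so keeping it later in the struct leaves the hot fields
	 * packed into fewer cachelines.
	 */
	int qc_index;		/* hardware queue index */
	uint64_t dma_addr;	/* bus address of the ring */
	size_t size;		/* allocation size, for freeing */
};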
Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfp_net.h | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index d4b82c893c2a..4e288b8f3510 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -125,7 +125,6 @@ struct nfp_nfd3_tx_buf; * struct nfp_net_tx_ring - TX ring structure * @r_vec: Back pointer to ring vector structure * @idx: Ring index from Linux's perspective - * @qcidx: Queue Controller Peripheral (QCP) queue index for the TX queue * @qcp_q: Pointer to base of the QCP TX queue * @cnt: Size of the queue in number of descriptors * @wr_p: TX ring write pointer (free running) @@ -135,6 +134,8 @@ struct nfp_nfd3_tx_buf; * (used for .xmit_more delayed kick) * @txbufs: Array of transmitted TX buffers, to free on transmit * @txds: Virtual address of TX ring in host memory + * + * @qcidx: Queue Controller Peripheral (QCP) queue index for the TX queue * @dma: DMA address of the TX ring * @size: Size, in bytes, of the TX ring (needed to free) * @is_xdp: Is this a XDP TX ring? @@ -143,7 +144,6 @@ struct nfp_net_tx_ring { struct nfp_net_r_vector *r_vec; u32 idx; - int qcidx; u8 __iomem *qcp_q; u32 cnt; @@ -156,6 +156,9 @@ struct nfp_net_tx_ring { struct nfp_nfd3_tx_buf *txbufs; struct nfp_nfd3_tx_desc *txds; + /* Cold data follows */ + int qcidx; + dma_addr_t dma; size_t size; bool is_xdp;
From patchwork Mon Mar 21 10:42:05 2022
X-Patchwork-Submitter: Simon Horman
X-Patchwork-Id: 12787082
X-Patchwork-Delegate: kuba@kernel.org
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 06/10] nfp: use TX ring pointer write back
Date: Mon, 21 Mar 2022 11:42:05 +0100
Message-Id: <20220321104209.273535-7-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
From: Jakub Kicinski

Newer versions of the PCIe microcode support writing back the position of the TX pointer back into host memory. This speeds up TX completions, because we avoid a read from device memory (replacing PCIe read with DMA coherent read).
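Conceptually the completion path switches from an uncached MMIO read of the device's queue pointer to a plain load from a DMA-coherent slot that the device keeps up to date. The sketch below shows that read-side choice under those assumptions; the names (example_read_tx_cmpl, the volatile accessors) are illustrative only, the driver's actual helper appears in the diff that follows.

/* Illustrative sketch: prefer a host-memory write-back slot over an
 * MMIO register read when finding the TX completion index.
 * Names are hypothetical.
 */
#include <stdint.h>

struct example_tx_ring {
	uint64_t *txrwb;	/* device-written completion index in
				 * DMA-coherent host memory, or NULL */
	void *qcp_q;		/* MMIO base of the hardware queue pointer */
};

static uint32_t example_mmio_read_rd_ptr(void *qcp_q)
{
	/* Stand-in for the expensive uncached PCIe register read. */
	return *(volatile uint32_t *)qcp_q;
}

static uint32_t example_read_tx_cmpl(const struct example_tx_ring *ring)
{
	if (ring->txrwb)
		/* Cheap coherent-memory load; the device DMAs its
		 * consumer index here as packets complete.
		 */
		return (uint32_t)*(volatile uint64_t *)ring->txrwb;

	return example_mmio_read_rd_ptr(ring->qcp_q);
}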
Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfd3/dp.c | 5 ++-- drivers/net/ethernet/netronome/nfp/nfp_net.h | 7 +++++ .../ethernet/netronome/nfp/nfp_net_common.c | 9 +++++- .../ethernet/netronome/nfp/nfp_net_debugfs.c | 5 +++- .../net/ethernet/netronome/nfp/nfp_net_dp.c | 29 +++++++++++++++++-- .../net/ethernet/netronome/nfp/nfp_net_dp.h | 8 +++++ 6 files changed, 56 insertions(+), 7 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c index 619f4d09e4e0..7db56abaa582 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -392,7 +392,7 @@ void nfp_nfd3_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) return; /* Work out how many descriptors have been transmitted */ - qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); + qcp_rd_p = nfp_net_read_tx_cmpl(tx_ring, dp); if (qcp_rd_p == tx_ring->qcp_rd_p) return; @@ -467,13 +467,14 @@ void nfp_nfd3_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) static bool nfp_nfd3_xdp_complete(struct nfp_net_tx_ring *tx_ring) { struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; u32 done_pkts = 0, done_bytes = 0; bool done_all; int idx, todo; u32 qcp_rd_p; /* Work out how many descriptors have been transmitted */ - qcp_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); + qcp_rd_p = nfp_net_read_tx_cmpl(tx_ring, dp); if (qcp_rd_p == tx_ring->qcp_rd_p) return true; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 4e288b8f3510..3c386972f69a 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -126,6 +126,7 @@ struct nfp_nfd3_tx_buf; * @r_vec: Back pointer to ring vector structure * @idx: Ring index from Linux's perspective * @qcp_q: Pointer to base of the QCP TX queue + * @txrwb: TX pointer write back area * @cnt: Size of the queue in number of descriptors * @wr_p: TX ring write pointer (free running) * @rd_p: TX ring read pointer (free running) @@ -145,6 +146,7 @@ struct nfp_net_tx_ring { u32 idx; u8 __iomem *qcp_q; + u64 *txrwb; u32 cnt; u32 wr_p; @@ -444,6 +446,8 @@ struct nfp_stat_pair { * @ctrl_bar: Pointer to mapped control BAR * * @ops: Callbacks and parameters for this vNIC's NFD version + * @txrwb: TX pointer write back area (indexed by queue id) + * @txrwb_dma: TX pointer write back area DMA address * @txd_cnt: Size of the TX ring in number of min size packets * @rxd_cnt: Size of the RX ring in number of min size packets * @num_r_vecs: Number of used ring vectors @@ -480,6 +484,9 @@ struct nfp_net_dp { const struct nfp_dp_ops *ops; + u64 *txrwb; + dma_addr_t txrwb_dma; + unsigned int txd_cnt; unsigned int rxd_cnt; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index dd234f5228f1..5cac5563028c 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1427,6 +1427,8 @@ struct nfp_net_dp *nfp_net_clone_dp(struct nfp_net *nn) new->rx_rings = NULL; new->num_r_vecs = 0; new->num_stack_tx_rings = 0; + new->txrwb = NULL; + new->txrwb_dma = 0; return new; } @@ -1963,7 +1965,7 @@ void nfp_net_info(struct nfp_net *nn) nn->fw_ver.resv, nn->fw_ver.class, nn->fw_ver.major, nn->fw_ver.minor, nn->max_mtu); - nn_info(nn, "CAP: %#x 
%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", + nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", nn->cap, nn->cap & NFP_NET_CFG_CTRL_PROMISC ? "PROMISC " : "", nn->cap & NFP_NET_CFG_CTRL_L2BC ? "L2BCFILT " : "", @@ -1981,6 +1983,7 @@ void nfp_net_info(struct nfp_net *nn) nn->cap & NFP_NET_CFG_CTRL_CTAG_FILTER ? "CTAG_FILTER " : "", nn->cap & NFP_NET_CFG_CTRL_MSIXAUTO ? "AUTOMASK " : "", nn->cap & NFP_NET_CFG_CTRL_IRQMOD ? "IRQMOD " : "", + nn->cap & NFP_NET_CFG_CTRL_TXRWB ? "TXRWB " : "", nn->cap & NFP_NET_CFG_CTRL_VXLAN ? "VXLAN " : "", nn->cap & NFP_NET_CFG_CTRL_NVGRE ? "NVGRE " : "", nn->cap & NFP_NET_CFG_CTRL_CSUM_COMPLETE ? @@ -2352,6 +2355,10 @@ int nfp_net_init(struct nfp_net *nn) nn->dp.ctrl |= NFP_NET_CFG_CTRL_IRQMOD; } + /* Enable TX pointer writeback, if supported */ + if (nn->cap & NFP_NET_CFG_CTRL_TXRWB) + nn->dp.ctrl |= NFP_NET_CFG_CTRL_TXRWB; + /* Stash the re-configuration queue away. First odd queue in TX Bar */ nn->qcp_cfg = nn->tx_bar + NFP_QCP_QUEUE_ADDR_SZ; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c index 791203d07ac7..d8b735ccf899 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c @@ -99,11 +99,14 @@ static int nfp_tx_q_show(struct seq_file *file, void *data) d_rd_p = nfp_qcp_rd_ptr_read(tx_ring->qcp_q); d_wr_p = nfp_qcp_wr_ptr_read(tx_ring->qcp_q); - seq_printf(file, "TX[%02d,%02d%s]: cnt=%u dma=%pad host=%p H_RD=%u H_WR=%u D_RD=%u D_WR=%u\n", + seq_printf(file, "TX[%02d,%02d%s]: cnt=%u dma=%pad host=%p H_RD=%u H_WR=%u D_RD=%u D_WR=%u", tx_ring->idx, tx_ring->qcidx, tx_ring == r_vec->tx_ring ? "" : "xdp", tx_ring->cnt, &tx_ring->dma, tx_ring->txds, tx_ring->rd_p, tx_ring->wr_p, d_rd_p, d_wr_p); + if (tx_ring->txrwb) + seq_printf(file, " TXRWB=%llu", *tx_ring->txrwb); + seq_putc(file, '\n'); nfp_net_debugfs_print_tx_descs(file, &nn->dp, r_vec, tx_ring, d_rd_p, d_wr_p); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c index 431bd2c13221..34dd94811df3 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.c @@ -44,12 +44,13 @@ void *nfp_net_rx_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) /** * nfp_net_tx_ring_init() - Fill in the boilerplate for a TX ring * @tx_ring: TX ring structure + * @dp: NFP Net data path struct * @r_vec: IRQ vector servicing this ring * @idx: Ring index * @is_xdp: Is this an XDP TX ring? */ static void -nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring, +nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring, struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, unsigned int idx, bool is_xdp) { @@ -61,6 +62,7 @@ nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring, u64_stats_init(&tx_ring->r_vec->tx_sync); tx_ring->qcidx = tx_ring->idx * nn->stride_tx; + tx_ring->txrwb = dp->txrwb ? 
&dp->txrwb[idx] : NULL; tx_ring->qcp_q = nn->tx_bar + NFP_QCP_QUEUE_OFF(tx_ring->qcidx); } @@ -187,14 +189,22 @@ int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) if (!dp->tx_rings) return -ENOMEM; + if (dp->ctrl & NFP_NET_CFG_CTRL_TXRWB) { + dp->txrwb = dma_alloc_coherent(dp->dev, + dp->num_tx_rings * sizeof(u64), + &dp->txrwb_dma, GFP_KERNEL); + if (!dp->txrwb) + goto err_free_rings; + } + for (r = 0; r < dp->num_tx_rings; r++) { int bias = 0; if (r >= dp->num_stack_tx_rings) bias = dp->num_stack_tx_rings; - nfp_net_tx_ring_init(&dp->tx_rings[r], &nn->r_vecs[r - bias], - r, bias); + nfp_net_tx_ring_init(&dp->tx_rings[r], dp, + &nn->r_vecs[r - bias], r, bias); if (nfp_net_tx_ring_alloc(dp, &dp->tx_rings[r])) goto err_free_prev; @@ -211,6 +221,10 @@ int nfp_net_tx_rings_prepare(struct nfp_net *nn, struct nfp_net_dp *dp) err_free_ring: nfp_net_tx_ring_free(dp, &dp->tx_rings[r]); } + if (dp->txrwb) + dma_free_coherent(dp->dev, dp->num_tx_rings * sizeof(u64), + dp->txrwb, dp->txrwb_dma); +err_free_rings: kfree(dp->tx_rings); return -ENOMEM; } @@ -224,6 +238,9 @@ void nfp_net_tx_rings_free(struct nfp_net_dp *dp) nfp_net_tx_ring_free(dp, &dp->tx_rings[r]); } + if (dp->txrwb) + dma_free_coherent(dp->dev, dp->num_tx_rings * sizeof(u64), + dp->txrwb, dp->txrwb_dma); kfree(dp->tx_rings); } @@ -377,6 +394,11 @@ nfp_net_tx_ring_hw_cfg_write(struct nfp_net *nn, struct nfp_net_tx_ring *tx_ring, unsigned int idx) { nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), tx_ring->dma); + if (tx_ring->txrwb) { + *tx_ring->txrwb = 0; + nn_writeq(nn, NFP_NET_CFG_TXR_WB_ADDR(idx), + nn->dp.txrwb_dma + idx * sizeof(u64)); + } nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), ilog2(tx_ring->cnt)); nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), tx_ring->r_vec->irq_entry); } @@ -388,6 +410,7 @@ void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx) nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), 0); nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), 0); + nn_writeq(nn, NFP_NET_CFG_TXR_WB_ADDR(idx), 0); nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), 0); nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), 0); } diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h index 81be8d17fa93..99579722aacf 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -60,6 +60,14 @@ static inline void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring) tx_ring->wr_ptr_add = 0; } +static inline u32 +nfp_net_read_tx_cmpl(struct nfp_net_tx_ring *tx_ring, struct nfp_net_dp *dp) +{ + if (tx_ring->txrwb) + return *tx_ring->txrwb; + return nfp_qcp_rd_ptr_read(tx_ring->qcp_q); +} + static inline void nfp_net_free_frag(void *frag, bool xdp) { if (!xdp) From patchwork Mon Mar 21 10:42:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Horman X-Patchwork-Id: 12787078 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7F6B3C433EF for ; Mon, 21 Mar 2022 10:42:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346443AbiCUKoR (ORCPT ); Mon, 21 Mar 2022 06:44:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1346439AbiCUKoO (ORCPT ); Mon, 21 Mar 2022 06:44:14 -0400
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 07/10] nfp: add per-data path feature mask
Date: Mon, 21 Mar 2022 11:42:06 +0100
Message-Id: <20220321104209.273535-8-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org From: Jakub Kicinski Make sure that features supported only by some of the data paths are not enabled for all. Add a mask of supported features into the data path op structure. Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfd3/rings.c | 15 +++++++++++++++ .../net/ethernet/netronome/nfp/nfp_net_common.c | 3 +++ drivers/net/ethernet/netronome/nfp/nfp_net_dp.h | 2 ++ 3 files changed, 20 insertions(+) diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c index 3ebbedd8ba64..47604d5e25eb 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/rings.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/rings.c @@ -242,9 +242,24 @@ nfp_nfd3_print_tx_descs(struct seq_file *file, } } +#define NFP_NFD3_CFG_CTRL_SUPPORTED \ + (NFP_NET_CFG_CTRL_ENABLE | NFP_NET_CFG_CTRL_PROMISC | \ + NFP_NET_CFG_CTRL_L2BC | NFP_NET_CFG_CTRL_L2MC | \ + NFP_NET_CFG_CTRL_RXCSUM | NFP_NET_CFG_CTRL_TXCSUM | \ + NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_TXVLAN | \ + NFP_NET_CFG_CTRL_GATHER | NFP_NET_CFG_CTRL_LSO | \ + NFP_NET_CFG_CTRL_CTAG_FILTER | NFP_NET_CFG_CTRL_CMSG_DATA | \ + NFP_NET_CFG_CTRL_RINGCFG | NFP_NET_CFG_CTRL_RSS | \ + NFP_NET_CFG_CTRL_IRQMOD | NFP_NET_CFG_CTRL_TXRWB | \ + NFP_NET_CFG_CTRL_VXLAN | NFP_NET_CFG_CTRL_NVGRE | \ + NFP_NET_CFG_CTRL_BPF | NFP_NET_CFG_CTRL_LSO2 | \ + NFP_NET_CFG_CTRL_RSS2 | NFP_NET_CFG_CTRL_CSUM_COMPLETE | \ + NFP_NET_CFG_CTRL_LIVE_ADDR) + const struct nfp_dp_ops nfp_nfd3_ops = { .version = NFP_NFD_VER_NFD3, .tx_min_desc_per_pkt = 1, + .cap_mask = NFP_NFD3_CFG_CTRL_SUPPORTED, .poll = nfp_nfd3_poll, .xsk_poll = nfp_nfd3_xsk_poll, .ctrl_poll = nfp_nfd3_ctrl_poll, diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 5cac5563028c..331253149f50 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -2303,6 +2303,9 @@ static int nfp_net_read_caps(struct nfp_net *nn) nn->dp.rx_offset = NFP_NET_RX_OFFSET; } + /* Mask out NFD-version-specific features */ + nn->cap &= nn->dp.ops->cap_mask; + /* For control vNICs mask out the capabilities app doesn't want. 
*/ if (!nn->dp.netdev) nn->cap &= nn->app->type->ctrl_cap_mask; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h index 99579722aacf..237ca1d9c886 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -115,6 +115,7 @@ enum nfp_nfd_version { * struct nfp_dp_ops - Hooks to wrap different implementation of different dp * @version: Indicate dp type * @tx_min_desc_per_pkt: Minimal TX descs needed for each packet + * @cap_mask: Mask of supported features * @poll: Napi poll for normal rx/tx * @xsk_poll: Napi poll when xsk is enabled * @ctrl_poll: Tasklet poll for ctrl rx/tx @@ -131,6 +132,7 @@ enum nfp_nfd_version { struct nfp_dp_ops { enum nfp_nfd_version version; unsigned int tx_min_desc_per_pkt; + u32 cap_mask; int (*poll)(struct napi_struct *napi, int budget); int (*xsk_poll)(struct napi_struct *napi, int budget); From patchwork Mon Mar 21 10:42:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Horman X-Patchwork-Id: 12787081 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C7CC4C433EF for ; Mon, 21 Mar 2022 10:43:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346444AbiCUKoT (ORCPT ); Mon, 21 Mar 2022 06:44:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35506 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346441AbiCUKoO (ORCPT ); Mon, 21 Mar 2022 06:44:14 -0400 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2132.outbound.protection.outlook.com [40.107.243.132]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7207A14A900 for ; Mon, 21 Mar 2022 03:42:42 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=kcTTKh4l/1BC9b8XRr9x3YfUc4cJekuPvICcgPJbpdFsF0Ia4REe2cxjkZDnwg4zIqtn7Sk6WdNOZfHYjlE+Gzz7tS38p5sT28ub7qKBaAO4cf0J7qUeu79kUPpMwIZw4m04OJiAcA55pLkht54XaBavK7hKY77XlFrHReNyTm3iWL2W3kNXwieh3XcLc6yiaE+d8RCrhJlCUwvrYCwL17Pmp8/n5zXgOrLcl5W/WMBUzrQDT4w7Vb/rg3kMBbj2hK7TzgE+OqJASY0iwFxuHgQeuWIXXYeX26kJzv+GGnH3CfSRvTkqQGSK2qnDkiSZaA5sz1zVdI+jWmVKh/xj1g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=3+ESlUUxlHG0lg+nzcCtNuhuEB4rq/Xgfl8TDsmkiL8=; b=FHTLB2vyicLCiZIwfSh0YIbwGQ32yg9+thEeYKNmhsFOYZpzxbFyrjvRSBxFJHwEPHDhQO86T0kk1jZud3YFiC97VTwoQ5Bo4Br9D5K28W56LQ/QQ+L4olZqdR2R811RvT2Rp66aTdkr/Xz35nv/eomSLDXArwrFyj4EM2V92NQfXsUKxCLCI85YBFprkdDyphTq89RR2axPCaN6b05d1ys2JMreSYzqTwYfxkQbI3JBioZJ0uZEu3mNA0PQprC/1R39KjE68MZJ74oUdqo2QlFI5M+HPMUCrMGYAInLkjPVYjrVDyNhZh3AfqlFxcYRSyByo0MLPYn/64uInkbdRQ== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com; dkim=pass header.d=corigine.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; 
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 08/10] nfp: choose data path based on version
Date: Mon, 21 Mar 2022 11:42:07 +0100
Message-Id: <20220321104209.273535-9-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

From: Jakub Kicinski

Prepare for choosing data path based on the firmware version field. Exploit one bit from the reserved byte in the firmware version field as the data path type. We need the firmware version right after vNIC is allocated, so it has to be read inside nfp_net_alloc(), callers don't have to set it afterwards. Following patches will bring the implementation of the second data path.
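The selection this patch adds to nfp_net_alloc() keys off a single bit of the former reserved byte. Below is a rough standalone sketch of that decode step, assuming only the DP mask/NFD3 values visible in this patch; every example_* name and the userspace harness are invented for illustration and are not driver code.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Values mirroring NFP_NET_CFG_VERSION_DP_MASK / _DP_NFD3 from the patch;
 * everything named example_* is illustrative only.
 */
#define EXAMPLE_VERSION_DP_MASK  1
#define EXAMPLE_VERSION_DP_NFD3  0

/* Decode the data path type from the firmware version "extend" byte and
 * pick a set of ops, failing for types this sketch does not know about.
 */
static int example_select_dp(uint8_t extend, const char **ops_name)
{
        switch (extend & EXAMPLE_VERSION_DP_MASK) {
        case EXAMPLE_VERSION_DP_NFD3:
                *ops_name = "nfp_nfd3_ops";
                return 0;
        default:
                return -EINVAL; /* unknown data path: abort the vNIC allocation */
        }
}

int main(void)
{
        const char *ops;

        if (example_select_dp(0x00, &ops) == 0)
                printf("extend=0x00 -> %s\n", ops);
        if (example_select_dp(0x01, &ops) == -EINVAL)
                printf("extend=0x01 -> unsupported until a second data path is added\n");
        return 0;
}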
Signed-off-by: Jakub Kicinski Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfp_net.h | 14 +++++++----- .../ethernet/netronome/nfp/nfp_net_common.c | 22 +++++++++++++++---- .../net/ethernet/netronome/nfp/nfp_net_ctrl.h | 4 +++- .../ethernet/netronome/nfp/nfp_net_ethtool.c | 2 +- .../net/ethernet/netronome/nfp/nfp_net_main.c | 9 ++++---- .../ethernet/netronome/nfp/nfp_netvf_main.c | 9 ++++---- 6 files changed, 41 insertions(+), 19 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index 3c386972f69a..e7646377de37 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -411,13 +411,17 @@ struct nfp_net_fw_version { u8 minor; u8 major; u8 class; - u8 resv; + + /* This byte can be exploited for more use, currently, + * BIT0: dp type, BIT[7:1]: reserved + */ + u8 extend; } __packed; static inline bool nfp_net_fw_ver_eq(struct nfp_net_fw_version *fw_ver, - u8 resv, u8 class, u8 major, u8 minor) + u8 extend, u8 class, u8 major, u8 minor) { - return fw_ver->resv == resv && + return fw_ver->extend == extend && fw_ver->class == class && fw_ver->major == major && fw_ver->minor == minor; @@ -855,11 +859,11 @@ static inline void nn_ctrl_bar_unlock(struct nfp_net *nn) /* Globals */ extern const char nfp_driver_version[]; -extern const struct net_device_ops nfp_net_netdev_ops; +extern const struct net_device_ops nfp_nfd3_netdev_ops; static inline bool nfp_netdev_is_nfp_net(struct net_device *netdev) { - return netdev->netdev_ops == &nfp_net_netdev_ops; + return netdev->netdev_ops == &nfp_nfd3_netdev_ops; } static inline int nfp_net_coalesce_para_check(u32 usecs, u32 pkts) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 331253149f50..0aa91065a7cb 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1892,7 +1892,7 @@ static int nfp_net_set_mac_address(struct net_device *netdev, void *addr) return 0; } -const struct net_device_ops nfp_net_netdev_ops = { +const struct net_device_ops nfp_nfd3_netdev_ops = { .ndo_init = nfp_app_ndo_init, .ndo_uninit = nfp_app_ndo_uninit, .ndo_open = nfp_net_netdev_open, @@ -1962,7 +1962,7 @@ void nfp_net_info(struct nfp_net *nn) nn->dp.num_tx_rings, nn->max_tx_rings, nn->dp.num_rx_rings, nn->max_rx_rings); nn_info(nn, "VER: %d.%d.%d.%d, Maximum supported MTU: %d\n", - nn->fw_ver.resv, nn->fw_ver.class, + nn->fw_ver.extend, nn->fw_ver.class, nn->fw_ver.major, nn->fw_ver.minor, nn->max_mtu); nn_info(nn, "CAP: %#x %s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", @@ -2036,7 +2036,16 @@ nfp_net_alloc(struct pci_dev *pdev, const struct nfp_dev_info *dev_info, nn->dp.ctrl_bar = ctrl_bar; nn->dev_info = dev_info; nn->pdev = pdev; - nn->dp.ops = &nfp_nfd3_ops; + nfp_net_get_fw_version(&nn->fw_ver, ctrl_bar); + + switch (FIELD_GET(NFP_NET_CFG_VERSION_DP_MASK, nn->fw_ver.extend)) { + case NFP_NET_CFG_VERSION_DP_NFD3: + nn->dp.ops = &nfp_nfd3_ops; + break; + default: + err = -EINVAL; + goto err_free_nn; + } nn->max_tx_rings = max_tx_rings; nn->max_rx_rings = max_rx_rings; @@ -2255,7 +2264,12 @@ static void nfp_net_netdev_init(struct nfp_net *nn) nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_LSO_ANY; /* Finalise the netdev setup */ - netdev->netdev_ops = &nfp_net_netdev_ops; + switch (nn->dp.ops->version) { + case NFP_NFD_VER_NFD3: + netdev->netdev_ops = &nfp_nfd3_netdev_ops; + break; + } + 
netdev->watchdog_timeo = msecs_to_jiffies(5 * 1000); /* MTU range: 68 - hw-specific max */ diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index 33fd32478905..7f04a5275a2d 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -149,7 +149,9 @@ * - define more STS bits */ #define NFP_NET_CFG_VERSION 0x0030 -#define NFP_NET_CFG_VERSION_RESERVED_MASK (0xff << 24) +#define NFP_NET_CFG_VERSION_RESERVED_MASK (0xfe << 24) +#define NFP_NET_CFG_VERSION_DP_NFD3 0 +#define NFP_NET_CFG_VERSION_DP_MASK 1 #define NFP_NET_CFG_VERSION_CLASS_MASK (0xff << 16) #define NFP_NET_CFG_VERSION_CLASS(x) (((x) & 0xff) << 16) #define NFP_NET_CFG_VERSION_CLASS_GENERIC 0 diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index 7d7150600485..61c8b450aafb 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -219,7 +219,7 @@ nfp_net_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo) struct nfp_net *nn = netdev_priv(netdev); snprintf(vnic_version, sizeof(vnic_version), "%d.%d.%d.%d", - nn->fw_ver.resv, nn->fw_ver.class, + nn->fw_ver.extend, nn->fw_ver.class, nn->fw_ver.major, nn->fw_ver.minor); strlcpy(drvinfo->bus_info, pci_name(nn->pdev), sizeof(drvinfo->bus_info)); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c index 09a0a2076c6e..ca4e05650fe6 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_main.c @@ -123,7 +123,6 @@ nfp_net_pf_alloc_vnic(struct nfp_pf *pf, bool needs_netdev, return nn; nn->app = pf->app; - nfp_net_get_fw_version(&nn->fw_ver, ctrl_bar); nn->tx_bar = qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; nn->rx_bar = qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; nn->dp.is_vf = 0; @@ -679,9 +678,11 @@ int nfp_net_pci_probe(struct nfp_pf *pf) } nfp_net_get_fw_version(&fw_ver, ctrl_bar); - if (fw_ver.resv || fw_ver.class != NFP_NET_CFG_VERSION_CLASS_GENERIC) { + if (fw_ver.extend & NFP_NET_CFG_VERSION_RESERVED_MASK || + fw_ver.class != NFP_NET_CFG_VERSION_CLASS_GENERIC) { nfp_err(pf->cpp, "Unknown Firmware ABI %d.%d.%d.%d\n", - fw_ver.resv, fw_ver.class, fw_ver.major, fw_ver.minor); + fw_ver.extend, fw_ver.class, + fw_ver.major, fw_ver.minor); err = -EINVAL; goto err_unmap; } @@ -697,7 +698,7 @@ int nfp_net_pci_probe(struct nfp_pf *pf) break; default: nfp_err(pf->cpp, "Unsupported Firmware ABI %d.%d.%d.%d\n", - fw_ver.resv, fw_ver.class, + fw_ver.extend, fw_ver.class, fw_ver.major, fw_ver.minor); err = -EINVAL; goto err_unmap; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c b/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c index 9ef226c6706e..a51eb26dd977 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_netvf_main.c @@ -122,9 +122,11 @@ static int nfp_netvf_pci_probe(struct pci_dev *pdev, } nfp_net_get_fw_version(&fw_ver, ctrl_bar); - if (fw_ver.resv || fw_ver.class != NFP_NET_CFG_VERSION_CLASS_GENERIC) { + if (fw_ver.extend & NFP_NET_CFG_VERSION_RESERVED_MASK || + fw_ver.class != NFP_NET_CFG_VERSION_CLASS_GENERIC) { dev_err(&pdev->dev, "Unknown Firmware ABI %d.%d.%d.%d\n", - fw_ver.resv, fw_ver.class, fw_ver.major, fw_ver.minor); + fw_ver.extend, fw_ver.class, + fw_ver.major, fw_ver.minor); err = -EINVAL; goto 
err_ctrl_unmap; } @@ -144,7 +146,7 @@ static int nfp_netvf_pci_probe(struct pci_dev *pdev, break; default: dev_err(&pdev->dev, "Unsupported Firmware ABI %d.%d.%d.%d\n", - fw_ver.resv, fw_ver.class, + fw_ver.extend, fw_ver.class, fw_ver.major, fw_ver.minor); err = -EINVAL; goto err_ctrl_unmap; @@ -186,7 +188,6 @@ static int nfp_netvf_pci_probe(struct pci_dev *pdev, } vf->nn = nn; - nn->fw_ver = fw_ver; nn->dp.is_vf = 1; nn->stride_tx = stride; nn->stride_rx = stride; From patchwork Mon Mar 21 10:42:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Horman X-Patchwork-Id: 12787084 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 28FA6C433EF for ; Mon, 21 Mar 2022 10:43:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346450AbiCUKoa (ORCPT ); Mon, 21 Mar 2022 06:44:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35622 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346454AbiCUKoP (ORCPT ); Mon, 21 Mar 2022 06:44:15 -0400 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2131.outbound.protection.outlook.com [40.107.243.131]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 38D1814A934 for ; Mon, 21 Mar 2022 03:42:44 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=IipFA3FR86qkHTSmV27MW9e4AacB9ofGJhfw+kCV6lEuTUoQiGUUfdqUBb7IsGaoSYLi9pvQShR9RF8FALpYOnjATt+7THaZ8gd5KktdkTXUh0O0jx0QO+iK1igUbgJPTTnWahu1tHfgjUnKg6dBPAjXRuCuBXXw+wvmnFhQnmOHqowf4Dlbxojnf/YoQXJN255ROJcGMwt7kSLL1MQ7YFdOB7lyDg6G6sbu58g674sZoJnQ3dzFNmIDHpnVvj4A5yCstvOyX7iZmNAIXZW0rl9akMTy7XhJBIB6p2zCDMZOvZld9uYzPmb1xxCZf5rtmmlFJYiHo9r8Jd0i1Vx7YQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=nQoFRJdAOC4cXu/Jd8ySq7TpTh9fzXoy5BzdD2OKtKE=; b=JzvqdOJNPRBQOIkFzUqx1x5S9OpgeOs3PzuQ4czLYYbLweU54YNr+rv/2ZzklejqzNhH7OFQ3ZZqTPtQLm6g2W97tq9Cn3Tseu2lEUNaQt/Wunoyta8tF+lDP/B+ChJITghIMJ4VlLvw/xTbe8JutqxSLOEyK5qSdzXM96WS0PQZAwi+mibLX7wzk0s6xPA21gXAgcZWSYNYURHGN6n2REtaHrZwMcb9k393QDnzZ2ANW4YwOp3NH5Lu9apUdE1aSm+kOKNuWLoFdbMDL9HI2UKeSkHCnEG3h2sLSa0eKWjNnhChecabFCGrHtjm+wJCCoBFr7FgJ1fbeD/2v6fG7w== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com; dkim=pass header.d=corigine.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=nQoFRJdAOC4cXu/Jd8ySq7TpTh9fzXoy5BzdD2OKtKE=; b=KOhAcI0Mkunx0QTS5pIfEV6i9tkNxwMcnra2b4L3JppPiqJSR1vThnqeifSaEKb+0GVbRRgrSSystqqZBhb1MoNgNBtfDHvCO7FvwCCgjRckTTafbSa/1pB9WaXVfWwgckl4UbKleR6mPNmqOfMonbwYF5KRf7Ky2jnJTbyzfzc= Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=corigine.com; Received: from PH0PR13MB4842.namprd13.prod.outlook.com (2603:10b6:510:78::6) by CY4PR13MB1512.namprd13.prod.outlook.com (2603:10b6:903:12f::10) with 
Microsoft SMTP Server id 15.20.5102.15; Mon, 21 Mar 2022 10:42:38 +0000
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 09/10] nfp: add support for NFDK data path
Date: Mon, 21 Mar 2022 11:42:08 +0100
Message-Id: <20220321104209.273535-10-simon.horman@corigine.com>
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

From: Jakub Kicinski

Add the new NFDK data path. The TX side is completely different: each packet uses multiple descriptor entries (between 2 and 32). The TX ring is divided into blocks of 32 descriptors, and the descriptors of one packet can't cross a block boundary. The RX side is the same for now.

ABI version 5 or later is required. There is no support for VLAN insertion on TX. The XDP_TX action and AF_XDP zero-copy are not implemented in the NFDK path.

Changes to Jakub's work:
* Move the hw_csum_tx statistics update after jumbo packet segmentation.
* Set the L3_CSUM flag to enable recalculation of the L3 header checksum in the IPv4 case.
* Mark TSO of a packet with metadata prepended as unsupported.
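A minimal standalone sketch of the block rule described above: a packet's descriptors must not straddle a 32-descriptor block, so the remainder of a block is padded with no-op descriptors when they would. The example_* names are invented for illustration; the real logic lives in nfp_nfdk_tx_maybe_close_block() in this patch and also accounts for the metadata descriptor, which this sketch omits.

#include <stdio.h>

/* 32-descriptor TX blocks as described in the patch; illustrative only. */
#define EXAMPLE_TX_DESC_BLOCK_CNT 32u

/* How many no-op descriptors must be written at write pointer wr_p so that
 * a packet needing n_descs descriptors does not cross a block boundary.
 */
static unsigned int example_pad_to_block(unsigned int wr_p, unsigned int n_descs)
{
        unsigned int first = wr_p / EXAMPLE_TX_DESC_BLOCK_CNT;
        unsigned int last = (wr_p + n_descs - 1) / EXAMPLE_TX_DESC_BLOCK_CNT;

        if (first == last)
                return 0; /* whole packet fits in the current block */

        /* Otherwise skip ahead to the start of the next block. */
        return EXAMPLE_TX_DESC_BLOCK_CNT - (wr_p % EXAMPLE_TX_DESC_BLOCK_CNT);
}

int main(void)
{
        /* 4 descriptors starting at slot 30 would straddle a boundary: pad 2. */
        printf("wr_p=30, n=4 -> pad %u\n", example_pad_to_block(30, 4));
        /* 4 descriptors starting at slot 4 fit in one block: no padding. */
        printf("wr_p=4,  n=4 -> pad %u\n", example_pad_to_block(4, 4));
        return 0;
}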
Signed-off-by: Jakub Kicinski Signed-off-by: Xingfeng Hu Signed-off-by: Yinjun Zhang Signed-off-by: Dianchao Wang Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/Makefile | 2 + drivers/net/ethernet/netronome/nfp/nfdk/dp.c | 1338 +++++++++++++++++ .../net/ethernet/netronome/nfp/nfdk/nfdk.h | 112 ++ .../net/ethernet/netronome/nfp/nfdk/rings.c | 195 +++ drivers/net/ethernet/netronome/nfp/nfp_net.h | 27 +- .../ethernet/netronome/nfp/nfp_net_common.c | 40 + .../net/ethernet/netronome/nfp/nfp_net_ctrl.h | 1 + .../net/ethernet/netronome/nfp/nfp_net_dp.h | 2 + .../net/ethernet/netronome/nfp/nfp_net_xsk.c | 4 + 9 files changed, 1715 insertions(+), 6 deletions(-) create mode 100644 drivers/net/ethernet/netronome/nfp/nfdk/dp.c create mode 100644 drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h create mode 100644 drivers/net/ethernet/netronome/nfp/nfdk/rings.c diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index 69168c03606f..9c0861d03634 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -23,6 +23,8 @@ nfp-objs := \ nfd3/dp.o \ nfd3/rings.o \ nfd3/xsk.o \ + nfdk/dp.o \ + nfdk/rings.o \ nfp_app.o \ nfp_app_nic.o \ nfp_devlink.o \ diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c new file mode 100644 index 000000000000..f03de6b7988b --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c @@ -0,0 +1,1338 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2015-2019 Netronome Systems, Inc. */ + +#include +#include +#include +#include +#include + +#include "../nfp_app.h" +#include "../nfp_net.h" +#include "../nfp_net_dp.h" +#include "../crypto/crypto.h" +#include "../crypto/fw.h" +#include "nfdk.h" + +static int nfp_nfdk_tx_ring_should_wake(struct nfp_net_tx_ring *tx_ring) +{ + return !nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT * 2); +} + +static int nfp_nfdk_tx_ring_should_stop(struct nfp_net_tx_ring *tx_ring) +{ + return nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT); +} + +static void nfp_nfdk_tx_ring_stop(struct netdev_queue *nd_q, + struct nfp_net_tx_ring *tx_ring) +{ + netif_tx_stop_queue(nd_q); + + /* We can race with the TX completion out of NAPI so recheck */ + smp_mb(); + if (unlikely(nfp_nfdk_tx_ring_should_wake(tx_ring))) + netif_tx_start_queue(nd_q); +} + +static __le64 +nfp_nfdk_tx_tso(struct nfp_net_r_vector *r_vec, struct nfp_nfdk_tx_buf *txbuf, + struct sk_buff *skb) +{ + u32 segs, hdrlen, l3_offset, l4_offset; + struct nfp_nfdk_tx_desc txd; + u16 mss; + + if (!skb->encapsulation) { + l3_offset = skb_network_offset(skb); + l4_offset = skb_transport_offset(skb); + hdrlen = skb_transport_offset(skb) + tcp_hdrlen(skb); + } else { + l3_offset = skb_inner_network_offset(skb); + l4_offset = skb_inner_transport_offset(skb); + hdrlen = skb_inner_transport_header(skb) - skb->data + + inner_tcp_hdrlen(skb); + } + + segs = skb_shinfo(skb)->gso_segs; + mss = skb_shinfo(skb)->gso_size & NFDK_DESC_TX_MSS_MASK; + + /* Note: TSO of the packet with metadata prepended to skb is not + * supported yet, in which case l3/l4_offset and lso_hdrlen need + * be correctly handled here. + * Concern: + * The driver doesn't have md_bytes easily available at this point. + * The PCI.IN PD ME won't have md_bytes bytes to add to lso_hdrlen, + * so it needs the full length there. 
The app MEs might prefer + * l3_offset and l4_offset relative to the start of packet data, + * but could probably cope with it being relative to the CTM buf + * data offset. + */ + txd.l3_offset = l3_offset; + txd.l4_offset = l4_offset; + txd.lso_meta_res = 0; + txd.mss = cpu_to_le16(mss); + txd.lso_hdrlen = hdrlen; + txd.lso_totsegs = segs; + + txbuf->pkt_cnt = segs; + txbuf->real_len = skb->len + hdrlen * (txbuf->pkt_cnt - 1); + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_lso++; + u64_stats_update_end(&r_vec->tx_sync); + + return txd.raw; +} + +static u8 +nfp_nfdk_tx_csum(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + unsigned int pkt_cnt, struct sk_buff *skb, u64 flags) +{ + struct ipv6hdr *ipv6h; + struct iphdr *iph; + + if (!(dp->ctrl & NFP_NET_CFG_CTRL_TXCSUM)) + return flags; + + if (skb->ip_summed != CHECKSUM_PARTIAL) + return flags; + + flags |= NFDK_DESC_TX_L4_CSUM; + + iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb); + ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) : ipv6_hdr(skb); + + /* L3 checksum offloading flag is not required for ipv6 */ + if (iph->version == 4) { + flags |= NFDK_DESC_TX_L3_CSUM; + } else if (ipv6h->version != 6) { + nn_dp_warn(dp, "partial checksum but ipv=%x!\n", iph->version); + return flags; + } + + u64_stats_update_begin(&r_vec->tx_sync); + if (!skb->encapsulation) { + r_vec->hw_csum_tx += pkt_cnt; + } else { + flags |= NFDK_DESC_TX_ENCAP; + r_vec->hw_csum_tx_inner += pkt_cnt; + } + u64_stats_update_end(&r_vec->tx_sync); + + return flags; +} + +static int +nfp_nfdk_tx_maybe_close_block(struct nfp_net_tx_ring *tx_ring, + unsigned int nr_frags, struct sk_buff *skb) +{ + unsigned int n_descs, wr_p, nop_slots; + const skb_frag_t *frag, *fend; + struct nfp_nfdk_tx_desc *txd; + unsigned int wr_idx; + int err; + +recount_descs: + n_descs = nfp_nfdk_headlen_to_segs(skb_headlen(skb)); + + frag = skb_shinfo(skb)->frags; + fend = frag + nr_frags; + for (; frag < fend; frag++) + n_descs += DIV_ROUND_UP(skb_frag_size(frag), + NFDK_TX_MAX_DATA_PER_DESC); + + if (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX)) { + if (skb_is_nonlinear(skb)) { + err = skb_linearize(skb); + if (err) + return err; + goto recount_descs; + } + return -EINVAL; + } + + /* Under count by 1 (don't count meta) for the round down to work out */ + n_descs += !!skb_is_gso(skb); + + if (round_down(tx_ring->wr_p, NFDK_TX_DESC_BLOCK_CNT) != + round_down(tx_ring->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT)) + goto close_block; + + if ((u32)tx_ring->data_pending + skb->len > NFDK_TX_MAX_DATA_PER_BLOCK) + goto close_block; + + return 0; + +close_block: + wr_p = tx_ring->wr_p; + nop_slots = D_BLOCK_CPL(wr_p); + + wr_idx = D_IDX(tx_ring, wr_p); + tx_ring->ktxbufs[wr_idx].skb = NULL; + txd = &tx_ring->ktxds[wr_idx]; + + memset(txd, 0, array_size(nop_slots, sizeof(struct nfp_nfdk_tx_desc))); + + tx_ring->data_pending = 0; + tx_ring->wr_p += nop_slots; + tx_ring->wr_ptr_add += nop_slots; + + return 0; +} + +static int nfp_nfdk_prep_port_id(struct sk_buff *skb) +{ + struct metadata_dst *md_dst = skb_metadata_dst(skb); + unsigned char *data; + + if (likely(!md_dst)) + return 0; + if (unlikely(md_dst->type != METADATA_HW_PORT_MUX)) + return 0; + + /* Note: Unsupported case when TSO a skb with metedata prepended. + * See the comments in `nfp_nfdk_tx_tso` for details. 
+ */ + if (unlikely(md_dst && skb_is_gso(skb))) + return -EOPNOTSUPP; + + if (unlikely(skb_cow_head(skb, sizeof(md_dst->u.port_info.port_id)))) + return -ENOMEM; + + data = skb_push(skb, sizeof(md_dst->u.port_info.port_id)); + put_unaligned_be32(md_dst->u.port_info.port_id, data); + + return sizeof(md_dst->u.port_info.port_id); +} + +static int +nfp_nfdk_prep_tx_meta(struct nfp_app *app, struct sk_buff *skb, + struct nfp_net_r_vector *r_vec) +{ + unsigned char *data; + int res, md_bytes; + u32 meta_id = 0; + + res = nfp_nfdk_prep_port_id(skb); + if (unlikely(res <= 0)) + return res; + + md_bytes = res; + meta_id = NFP_NET_META_PORTID; + + if (unlikely(skb_cow_head(skb, sizeof(meta_id)))) + return -ENOMEM; + + md_bytes += sizeof(meta_id); + + meta_id = FIELD_PREP(NFDK_META_LEN, md_bytes) | + FIELD_PREP(NFDK_META_FIELDS, meta_id); + + data = skb_push(skb, sizeof(meta_id)); + put_unaligned_be32(meta_id, data); + + return NFDK_DESC_TX_CHAIN_META; +} + +/** + * nfp_nfdk_tx() - Main transmit entry point + * @skb: SKB to transmit + * @netdev: netdev structure + * + * Return: NETDEV_TX_OK on success. + */ +netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) +{ + struct nfp_net *nn = netdev_priv(netdev); + struct nfp_nfdk_tx_buf *txbuf, *etxbuf; + u32 cnt, tmp_dlen, dlen_type = 0; + struct nfp_net_tx_ring *tx_ring; + struct nfp_net_r_vector *r_vec; + const skb_frag_t *frag, *fend; + struct nfp_nfdk_tx_desc *txd; + unsigned int real_len, qidx; + unsigned int dma_len, type; + struct netdev_queue *nd_q; + struct nfp_net_dp *dp; + int nr_frags, wr_idx; + dma_addr_t dma_addr; + u64 metadata; + + dp = &nn->dp; + qidx = skb_get_queue_mapping(skb); + tx_ring = &dp->tx_rings[qidx]; + r_vec = tx_ring->r_vec; + nd_q = netdev_get_tx_queue(dp->netdev, qidx); + + /* Don't bother counting frags, assume the worst */ + if (unlikely(nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT))) { + nn_dp_warn(dp, "TX ring %d busy. wrp=%u rdp=%u\n", + qidx, tx_ring->wr_p, tx_ring->rd_p); + netif_tx_stop_queue(nd_q); + nfp_net_tx_xmit_more_flush(tx_ring); + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_busy++; + u64_stats_update_end(&r_vec->tx_sync); + return NETDEV_TX_BUSY; + } + + metadata = nfp_nfdk_prep_tx_meta(nn->app, skb, r_vec); + if (unlikely((int)metadata < 0)) + goto err_flush; + + nr_frags = skb_shinfo(skb)->nr_frags; + if (nfp_nfdk_tx_maybe_close_block(tx_ring, nr_frags, skb)) + goto err_flush; + + /* DMA map all */ + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + txd = &tx_ring->ktxds[wr_idx]; + txbuf = &tx_ring->ktxbufs[wr_idx]; + + dma_len = skb_headlen(skb); + if (skb_is_gso(skb)) + type = NFDK_DESC_TX_TYPE_TSO; + else if (!nr_frags && dma_len < NFDK_TX_MAX_DATA_PER_HEAD) + type = NFDK_DESC_TX_TYPE_SIMPLE; + else + type = NFDK_DESC_TX_TYPE_GATHER; + + dma_addr = dma_map_single(dp->dev, skb->data, dma_len, DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_warn_dma; + + txbuf->skb = skb; + txbuf++; + + txbuf->dma_addr = dma_addr; + txbuf++; + + /* FIELD_PREP() implicitly truncates to chunk */ + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN_HEAD, dma_len) | + FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type); + + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + /* starts at bit 0 */ + BUILD_BUG_ON(!(NFDK_DESC_TX_DMA_LEN_HEAD & 1)); + + /* Preserve the original dlen_type, this way below the EOP logic + * can use dlen_type. 
+ */ + tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD; + dma_len -= tmp_dlen; + dma_addr += tmp_dlen + 1; + txd++; + + /* The rest of the data (if any) will be in larger dma descritors + * and is handled with the fragment loop. + */ + frag = skb_shinfo(skb)->frags; + fend = frag + nr_frags; + + while (true) { + while (dma_len > 0) { + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len); + + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + dma_len -= dlen_type; + dma_addr += dlen_type + 1; + txd++; + } + + if (frag >= fend) + break; + + dma_len = skb_frag_size(frag); + dma_addr = skb_frag_dma_map(dp->dev, frag, 0, dma_len, + DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_unmap; + + txbuf->dma_addr = dma_addr; + txbuf++; + + frag++; + } + + (txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP); + + if (!skb_is_gso(skb)) { + real_len = skb->len; + /* Metadata desc */ + metadata = nfp_nfdk_tx_csum(dp, r_vec, 1, skb, metadata); + txd->raw = cpu_to_le64(metadata); + txd++; + } else { + /* lso desc should be placed after metadata desc */ + (txd + 1)->raw = nfp_nfdk_tx_tso(r_vec, txbuf, skb); + real_len = txbuf->real_len; + /* Metadata desc */ + metadata = nfp_nfdk_tx_csum(dp, r_vec, txbuf->pkt_cnt, skb, metadata); + txd->raw = cpu_to_le64(metadata); + txd += 2; + txbuf++; + } + + cnt = txd - tx_ring->ktxds - wr_idx; + if (unlikely(round_down(wr_idx, NFDK_TX_DESC_BLOCK_CNT) != + round_down(wr_idx + cnt - 1, NFDK_TX_DESC_BLOCK_CNT))) + goto err_warn_overflow; + + skb_tx_timestamp(skb); + + tx_ring->wr_p += cnt; + if (tx_ring->wr_p % NFDK_TX_DESC_BLOCK_CNT) + tx_ring->data_pending += skb->len; + else + tx_ring->data_pending = 0; + + if (nfp_nfdk_tx_ring_should_stop(tx_ring)) + nfp_nfdk_tx_ring_stop(nd_q, tx_ring); + + tx_ring->wr_ptr_add += cnt; + if (__netdev_tx_sent_queue(nd_q, real_len, netdev_xmit_more())) + nfp_net_tx_xmit_more_flush(tx_ring); + + return NETDEV_TX_OK; + +err_warn_overflow: + WARN_ONCE(1, "unable to fit packet into a descriptor wr_idx:%d head:%d frags:%d cnt:%d", + wr_idx, skb_headlen(skb), nr_frags, cnt); + if (skb_is_gso(skb)) + txbuf--; +err_unmap: + /* txbuf pointed to the next-to-use */ + etxbuf = txbuf; + /* first txbuf holds the skb */ + txbuf = &tx_ring->ktxbufs[wr_idx + 1]; + if (txbuf < etxbuf) { + dma_unmap_single(dp->dev, txbuf->dma_addr, + skb_headlen(skb), DMA_TO_DEVICE); + txbuf->raw = 0; + txbuf++; + } + frag = skb_shinfo(skb)->frags; + while (etxbuf < txbuf) { + dma_unmap_page(dp->dev, txbuf->dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + txbuf->raw = 0; + frag++; + txbuf++; + } +err_warn_dma: + nn_dp_warn(dp, "Failed to map DMA TX buffer\n"); +err_flush: + nfp_net_tx_xmit_more_flush(tx_ring); + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_errors++; + u64_stats_update_end(&r_vec->tx_sync); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +/** + * nfp_nfdk_tx_complete() - Handled completed TX packets + * @tx_ring: TX ring structure + * @budget: NAPI budget (only used as bool to determine if in NAPI context) + */ +static void nfp_nfdk_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + u32 done_pkts = 0, done_bytes = 0; + struct nfp_nfdk_tx_buf *ktxbufs; + struct device *dev = dp->dev; + struct netdev_queue *nd_q; + u32 rd_p, qcp_rd_p; + int todo; + + rd_p = tx_ring->rd_p; + if (tx_ring->wr_p == rd_p) + return; + + /* Work out how many 
descriptors have been transmitted */ + qcp_rd_p = nfp_net_read_tx_cmpl(tx_ring, dp); + + if (qcp_rd_p == tx_ring->qcp_rd_p) + return; + + todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); + ktxbufs = tx_ring->ktxbufs; + + while (todo > 0) { + const skb_frag_t *frag, *fend; + unsigned int size, n_descs = 1; + struct nfp_nfdk_tx_buf *txbuf; + struct sk_buff *skb; + + txbuf = &ktxbufs[D_IDX(tx_ring, rd_p)]; + skb = txbuf->skb; + txbuf++; + + /* Closed block */ + if (!skb) { + n_descs = D_BLOCK_CPL(rd_p); + goto next; + } + + /* Unmap head */ + size = skb_headlen(skb); + n_descs += nfp_nfdk_headlen_to_segs(size); + dma_unmap_single(dev, txbuf->dma_addr, size, DMA_TO_DEVICE); + txbuf++; + + /* Unmap frags */ + frag = skb_shinfo(skb)->frags; + fend = frag + skb_shinfo(skb)->nr_frags; + for (; frag < fend; frag++) { + size = skb_frag_size(frag); + n_descs += DIV_ROUND_UP(size, + NFDK_TX_MAX_DATA_PER_DESC); + dma_unmap_page(dev, txbuf->dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + txbuf++; + } + + if (!skb_is_gso(skb)) { + done_bytes += skb->len; + done_pkts++; + } else { + done_bytes += txbuf->real_len; + done_pkts += txbuf->pkt_cnt; + n_descs++; + } + + napi_consume_skb(skb, budget); +next: + rd_p += n_descs; + todo -= n_descs; + } + + tx_ring->rd_p = rd_p; + tx_ring->qcp_rd_p = qcp_rd_p; + + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_bytes += done_bytes; + r_vec->tx_pkts += done_pkts; + u64_stats_update_end(&r_vec->tx_sync); + + if (!dp->netdev) + return; + + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + netdev_tx_completed_queue(nd_q, done_pkts, done_bytes); + if (nfp_nfdk_tx_ring_should_wake(tx_ring)) { + /* Make sure TX thread will see updated tx_ring->rd_p */ + smp_mb(); + + if (unlikely(netif_tx_queue_stopped(nd_q))) + netif_tx_wake_queue(nd_q); + } + + WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, + "TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", + tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); +} + +static bool nfp_nfdk_xdp_complete(struct nfp_net_tx_ring *tx_ring) +{ + return true; +} + +/* Receive processing */ +static void * +nfp_nfdk_napi_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) +{ + void *frag; + + if (!dp->xdp_prog) { + frag = napi_alloc_frag(dp->fl_bufsz); + if (unlikely(!frag)) + return NULL; + } else { + struct page *page; + + page = dev_alloc_page(); + if (unlikely(!page)) + return NULL; + frag = page_address(page); + } + + *dma_addr = nfp_net_dma_map_rx(dp, frag); + if (dma_mapping_error(dp->dev, *dma_addr)) { + nfp_net_free_frag(frag, dp->xdp_prog); + nn_dp_warn(dp, "Failed to map DMA RX buffer\n"); + return NULL; + } + + return frag; +} + +/** + * nfp_nfdk_rx_give_one() - Put mapped skb on the software and hardware rings + * @dp: NFP Net data path struct + * @rx_ring: RX ring structure + * @frag: page fragment buffer + * @dma_addr: DMA address of skb mapping + */ +static void +nfp_nfdk_rx_give_one(const struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring, + void *frag, dma_addr_t dma_addr) +{ + unsigned int wr_idx; + + wr_idx = D_IDX(rx_ring, rx_ring->wr_p); + + nfp_net_dma_sync_dev_rx(dp, dma_addr); + + /* Stash SKB and DMA address away */ + rx_ring->rxbufs[wr_idx].frag = frag; + rx_ring->rxbufs[wr_idx].dma_addr = dma_addr; + + /* Fill freelist descriptor */ + rx_ring->rxds[wr_idx].fld.reserved = 0; + rx_ring->rxds[wr_idx].fld.meta_len_dd = 0; + nfp_desc_set_dma_addr(&rx_ring->rxds[wr_idx].fld, + dma_addr + dp->rx_dma_off); + + rx_ring->wr_p++; + if (!(rx_ring->wr_p % NFP_NET_FL_BATCH)) { + /* Update write pointer of the 
freelist queue. Make + * sure all writes are flushed before telling the hardware. + */ + wmb(); + nfp_qcp_wr_ptr_add(rx_ring->qcp_fl, NFP_NET_FL_BATCH); + } +} + +/** + * nfp_nfdk_rx_ring_fill_freelist() - Give buffers from the ring to FW + * @dp: NFP Net data path struct + * @rx_ring: RX ring to fill + */ +void nfp_nfdk_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring) +{ + unsigned int i; + + for (i = 0; i < rx_ring->cnt - 1; i++) + nfp_nfdk_rx_give_one(dp, rx_ring, rx_ring->rxbufs[i].frag, + rx_ring->rxbufs[i].dma_addr); +} + +/** + * nfp_nfdk_rx_csum_has_errors() - group check if rxd has any csum errors + * @flags: RX descriptor flags field in CPU byte order + */ +static int nfp_nfdk_rx_csum_has_errors(u16 flags) +{ + u16 csum_all_checked, csum_all_ok; + + csum_all_checked = flags & __PCIE_DESC_RX_CSUM_ALL; + csum_all_ok = flags & __PCIE_DESC_RX_CSUM_ALL_OK; + + return csum_all_checked != (csum_all_ok << PCIE_DESC_RX_CSUM_OK_SHIFT); +} + +/** + * nfp_nfdk_rx_csum() - set SKB checksum field based on RX descriptor flags + * @dp: NFP Net data path struct + * @r_vec: per-ring structure + * @rxd: Pointer to RX descriptor + * @meta: Parsed metadata prepend + * @skb: Pointer to SKB + */ +static void +nfp_nfdk_rx_csum(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct nfp_net_rx_desc *rxd, struct nfp_meta_parsed *meta, + struct sk_buff *skb) +{ + skb_checksum_none_assert(skb); + + if (!(dp->netdev->features & NETIF_F_RXCSUM)) + return; + + if (meta->csum_type) { + skb->ip_summed = meta->csum_type; + skb->csum = meta->csum; + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_complete++; + u64_stats_update_end(&r_vec->rx_sync); + return; + } + + if (nfp_nfdk_rx_csum_has_errors(le16_to_cpu(rxd->rxd.flags))) { + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_error++; + u64_stats_update_end(&r_vec->rx_sync); + return; + } + + /* Assume that the firmware will never report inner CSUM_OK unless outer + * L4 headers were successfully parsed. FW will always report zero UDP + * checksum as CSUM_OK. 
+ */ + if (rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM_OK || + rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM_OK) { + __skb_incr_checksum_unnecessary(skb); + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_ok++; + u64_stats_update_end(&r_vec->rx_sync); + } + + if (rxd->rxd.flags & PCIE_DESC_RX_I_TCP_CSUM_OK || + rxd->rxd.flags & PCIE_DESC_RX_I_UDP_CSUM_OK) { + __skb_incr_checksum_unnecessary(skb); + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->hw_csum_rx_inner_ok++; + u64_stats_update_end(&r_vec->rx_sync); + } +} + +static void +nfp_nfdk_set_hash(struct net_device *netdev, struct nfp_meta_parsed *meta, + unsigned int type, __be32 *hash) +{ + if (!(netdev->features & NETIF_F_RXHASH)) + return; + + switch (type) { + case NFP_NET_RSS_IPV4: + case NFP_NET_RSS_IPV6: + case NFP_NET_RSS_IPV6_EX: + meta->hash_type = PKT_HASH_TYPE_L3; + break; + default: + meta->hash_type = PKT_HASH_TYPE_L4; + break; + } + + meta->hash = get_unaligned_be32(hash); +} + +static bool +nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, + void *data, void *pkt, unsigned int pkt_len, int meta_len) +{ + u32 meta_info; + + meta_info = get_unaligned_be32(data); + data += 4; + + while (meta_info) { + switch (meta_info & NFP_NET_META_FIELD_MASK) { + case NFP_NET_META_HASH: + meta_info >>= NFP_NET_META_FIELD_SIZE; + nfp_nfdk_set_hash(netdev, meta, + meta_info & NFP_NET_META_FIELD_MASK, + (__be32 *)data); + data += 4; + break; + case NFP_NET_META_MARK: + meta->mark = get_unaligned_be32(data); + data += 4; + break; + case NFP_NET_META_PORTID: + meta->portid = get_unaligned_be32(data); + data += 4; + break; + case NFP_NET_META_CSUM: + meta->csum_type = CHECKSUM_COMPLETE; + meta->csum = + (__force __wsum)__get_unaligned_cpu32(data); + data += 4; + break; + case NFP_NET_META_RESYNC_INFO: + if (nfp_net_tls_rx_resync_req(netdev, data, pkt, + pkt_len)) + return false; + data += sizeof(struct nfp_net_tls_resync_req); + break; + default: + return true; + } + + meta_info >>= NFP_NET_META_FIELD_SIZE; + } + + return data != pkt; +} + +static void +nfp_nfdk_rx_drop(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, + struct nfp_net_rx_ring *rx_ring, struct nfp_net_rx_buf *rxbuf, + struct sk_buff *skb) +{ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_drops++; + /* If we have both skb and rxbuf the replacement buffer allocation + * must have failed, count this as an alloc failure. + */ + if (skb && rxbuf) + r_vec->rx_replace_buf_alloc_fail++; + u64_stats_update_end(&r_vec->rx_sync); + + /* skb is build based on the frag, free_skb() would free the frag + * so to be able to reuse it we need an extra ref. + */ + if (skb && rxbuf && skb->head == rxbuf->frag) + page_ref_inc(virt_to_head_page(rxbuf->frag)); + if (rxbuf) + nfp_nfdk_rx_give_one(dp, rx_ring, rxbuf->frag, rxbuf->dma_addr); + if (skb) + dev_kfree_skb_any(skb); +} + +/** + * nfp_nfdk_rx() - receive up to @budget packets on @rx_ring + * @rx_ring: RX ring to receive from + * @budget: NAPI budget + * + * Note, this function is separated out from the napi poll function to + * more cleanly separate packet receive code from other bookkeeping + * functions performed in the napi poll function. + * + * Return: Number of packets received. 
+ */ +static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget) +{ + struct nfp_net_r_vector *r_vec = rx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + struct nfp_net_tx_ring *tx_ring; + struct bpf_prog *xdp_prog; + bool xdp_tx_cmpl = false; + unsigned int true_bufsz; + struct sk_buff *skb; + int pkts_polled = 0; + struct xdp_buff xdp; + int idx; + + xdp_prog = READ_ONCE(dp->xdp_prog); + true_bufsz = xdp_prog ? PAGE_SIZE : dp->fl_bufsz; + xdp_init_buff(&xdp, PAGE_SIZE - NFP_NET_RX_BUF_HEADROOM, + &rx_ring->xdp_rxq); + tx_ring = r_vec->xdp_ring; + + while (pkts_polled < budget) { + unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; + struct nfp_net_rx_buf *rxbuf; + struct nfp_net_rx_desc *rxd; + struct nfp_meta_parsed meta; + bool redir_egress = false; + struct net_device *netdev; + dma_addr_t new_dma_addr; + u32 meta_len_xdp = 0; + void *new_frag; + + idx = D_IDX(rx_ring, rx_ring->rd_p); + + rxd = &rx_ring->rxds[idx]; + if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) + break; + + /* Memory barrier to ensure that we won't do other reads + * before the DD bit. + */ + dma_rmb(); + + memset(&meta, 0, sizeof(meta)); + + rx_ring->rd_p++; + pkts_polled++; + + rxbuf = &rx_ring->rxbufs[idx]; + /* < meta_len > + * <-- [rx_offset] --> + * --------------------------------------------------------- + * | [XX] | metadata | packet | XXXX | + * --------------------------------------------------------- + * <---------------- data_len ---------------> + * + * The rx_offset is fixed for all packets, the meta_len can vary + * on a packet by packet basis. If rx_offset is set to zero + * (_RX_OFFSET_DYNAMIC) metadata starts at the beginning of the + * buffer and is immediately followed by the packet (no [XX]). + */ + meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; + data_len = le16_to_cpu(rxd->rxd.data_len); + pkt_len = data_len - meta_len; + + pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; + if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) + pkt_off += meta_len; + else + pkt_off += dp->rx_offset; + meta_off = pkt_off - meta_len; + + /* Stats update */ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_pkts++; + r_vec->rx_bytes += pkt_len; + u64_stats_update_end(&r_vec->rx_sync); + + if (unlikely(meta_len > NFP_NET_MAX_PREPEND || + (dp->rx_offset && meta_len > dp->rx_offset))) { + nn_dp_warn(dp, "oversized RX packet metadata %u\n", + meta_len); + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + continue; + } + + nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, + data_len); + + if (meta_len) { + if (unlikely(nfp_nfdk_parse_meta(dp->netdev, &meta, + rxbuf->frag + meta_off, + rxbuf->frag + pkt_off, + pkt_len, meta_len))) { + nn_dp_warn(dp, "invalid RX packet metadata\n"); + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, + NULL); + continue; + } + } + + if (xdp_prog && !meta.portid) { + void *orig_data = rxbuf->frag + pkt_off; + int act; + + xdp_prepare_buff(&xdp, + rxbuf->frag + NFP_NET_RX_BUF_HEADROOM, + pkt_off - NFP_NET_RX_BUF_HEADROOM, + pkt_len, true); + + act = bpf_prog_run_xdp(xdp_prog, &xdp); + + pkt_len = xdp.data_end - xdp.data; + pkt_off += xdp.data - orig_data; + + switch (act) { + case XDP_PASS: + meta_len_xdp = xdp.data - xdp.data_meta; + break; + default: + bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_ABORTED: + trace_xdp_exception(dp->netdev, xdp_prog, act); + fallthrough; + case XDP_DROP: + nfp_nfdk_rx_give_one(dp, rx_ring, rxbuf->frag, + rxbuf->dma_addr); + continue; + } + } + + if 
(likely(!meta.portid)) { + netdev = dp->netdev; + } else if (meta.portid == NFP_META_PORT_ID_CTRL) { + struct nfp_net *nn = netdev_priv(dp->netdev); + + nfp_app_ctrl_rx_raw(nn->app, rxbuf->frag + pkt_off, + pkt_len); + nfp_nfdk_rx_give_one(dp, rx_ring, rxbuf->frag, + rxbuf->dma_addr); + continue; + } else { + struct nfp_net *nn; + + nn = netdev_priv(dp->netdev); + netdev = nfp_app_dev_get(nn->app, meta.portid, + &redir_egress); + if (unlikely(!netdev)) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, + NULL); + continue; + } + + if (nfp_netdev_is_nfp_repr(netdev)) + nfp_repr_inc_rx_stats(netdev, pkt_len); + } + + skb = build_skb(rxbuf->frag, true_bufsz); + if (unlikely(!skb)) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + continue; + } + new_frag = nfp_nfdk_napi_alloc_one(dp, &new_dma_addr); + if (unlikely(!new_frag)) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); + continue; + } + + nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); + + nfp_nfdk_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); + + skb_reserve(skb, pkt_off); + skb_put(skb, pkt_len); + + skb->mark = meta.mark; + skb_set_hash(skb, meta.hash, meta.hash_type); + + skb_record_rx_queue(skb, rx_ring->idx); + skb->protocol = eth_type_trans(skb, netdev); + + nfp_nfdk_rx_csum(dp, r_vec, rxd, &meta, skb); + + if (rxd->rxd.flags & PCIE_DESC_RX_VLAN) + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), + le16_to_cpu(rxd->rxd.vlan)); + if (meta_len_xdp) + skb_metadata_set(skb, meta_len_xdp); + + if (likely(!redir_egress)) { + napi_gro_receive(&rx_ring->r_vec->napi, skb); + } else { + skb->dev = netdev; + skb_reset_network_header(skb); + __skb_push(skb, ETH_HLEN); + dev_queue_xmit(skb); + } + } + + if (xdp_prog) { + if (tx_ring->wr_ptr_add) + nfp_net_tx_xmit_more_flush(tx_ring); + else if (unlikely(tx_ring->wr_p != tx_ring->rd_p) && + !xdp_tx_cmpl) + if (!nfp_nfdk_xdp_complete(tx_ring)) + pkts_polled = budget; + } + + return pkts_polled; +} + +/** + * nfp_nfdk_poll() - napi poll function + * @napi: NAPI structure + * @budget: NAPI budget + * + * Return: number of packets polled. 
+ */ +int nfp_nfdk_poll(struct napi_struct *napi, int budget) +{ + struct nfp_net_r_vector *r_vec = + container_of(napi, struct nfp_net_r_vector, napi); + unsigned int pkts_polled = 0; + + if (r_vec->tx_ring) + nfp_nfdk_tx_complete(r_vec->tx_ring, budget); + if (r_vec->rx_ring) + pkts_polled = nfp_nfdk_rx(r_vec->rx_ring, budget); + + if (pkts_polled < budget) + if (napi_complete_done(napi, pkts_polled)) + nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); + + if (r_vec->nfp_net->rx_coalesce_adapt_on && r_vec->rx_ring) { + struct dim_sample dim_sample = {}; + unsigned int start; + u64 pkts, bytes; + + do { + start = u64_stats_fetch_begin(&r_vec->rx_sync); + pkts = r_vec->rx_pkts; + bytes = r_vec->rx_bytes; + } while (u64_stats_fetch_retry(&r_vec->rx_sync, start)); + + dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); + net_dim(&r_vec->rx_dim, dim_sample); + } + + if (r_vec->nfp_net->tx_coalesce_adapt_on && r_vec->tx_ring) { + struct dim_sample dim_sample = {}; + unsigned int start; + u64 pkts, bytes; + + do { + start = u64_stats_fetch_begin(&r_vec->tx_sync); + pkts = r_vec->tx_pkts; + bytes = r_vec->tx_bytes; + } while (u64_stats_fetch_retry(&r_vec->tx_sync, start)); + + dim_update_sample(r_vec->event_ctr, pkts, bytes, &dim_sample); + net_dim(&r_vec->tx_dim, dim_sample); + } + + return pkts_polled; +} + +/* Control device data path + */ + +bool +nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old) +{ + u32 cnt, tmp_dlen, dlen_type = 0; + struct nfp_net_tx_ring *tx_ring; + struct nfp_nfdk_tx_buf *txbuf; + struct nfp_nfdk_tx_desc *txd; + unsigned int dma_len, type; + struct nfp_net_dp *dp; + dma_addr_t dma_addr; + u64 metadata = 0; + int wr_idx; + + dp = &r_vec->nfp_net->dp; + tx_ring = r_vec->tx_ring; + + if (WARN_ON_ONCE(skb_shinfo(skb)->nr_frags)) { + nn_dp_warn(dp, "Driver's CTRL TX does not implement gather\n"); + goto err_free; + } + + /* Don't bother counting frags, assume the worst */ + if (unlikely(nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT))) { + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_busy++; + u64_stats_update_end(&r_vec->tx_sync); + if (!old) + __skb_queue_tail(&r_vec->queue, skb); + else + __skb_queue_head(&r_vec->queue, skb); + return NETDEV_TX_BUSY; + } + + if (nfp_app_ctrl_has_meta(nn->app)) { + if (unlikely(skb_headroom(skb) < 8)) { + nn_dp_warn(dp, "CTRL TX on skb without headroom\n"); + goto err_free; + } + metadata = NFDK_DESC_TX_CHAIN_META; + put_unaligned_be32(NFP_META_PORT_ID_CTRL, skb_push(skb, 4)); + put_unaligned_be32(FIELD_PREP(NFDK_META_LEN, 8) | + FIELD_PREP(NFDK_META_FIELDS, + NFP_NET_META_PORTID), + skb_push(skb, 4)); + } + + if (nfp_nfdk_tx_maybe_close_block(tx_ring, 0, skb)) + goto err_free; + + /* DMA map all */ + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + txd = &tx_ring->ktxds[wr_idx]; + txbuf = &tx_ring->ktxbufs[wr_idx]; + + dma_len = skb_headlen(skb); + if (dma_len < NFDK_TX_MAX_DATA_PER_HEAD) + type = NFDK_DESC_TX_TYPE_SIMPLE; + else + type = NFDK_DESC_TX_TYPE_GATHER; + + dma_addr = dma_map_single(dp->dev, skb->data, dma_len, DMA_TO_DEVICE); + if (dma_mapping_error(dp->dev, dma_addr)) + goto err_warn_dma; + + txbuf->skb = skb; + txbuf++; + + txbuf->dma_addr = dma_addr; + txbuf++; + + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN_HEAD, dma_len) | + FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type); + + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD; + dma_len -= tmp_dlen; + 
dma_addr += tmp_dlen + 1; + txd++; + + while (dma_len > 0) { + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len); + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + dlen_type &= NFDK_DESC_TX_DMA_LEN; + dma_len -= dlen_type; + dma_addr += dlen_type + 1; + txd++; + } + + (txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP); + + /* Metadata desc */ + txd->raw = cpu_to_le64(metadata); + txd++; + + cnt = txd - tx_ring->ktxds - wr_idx; + if (unlikely(round_down(wr_idx, NFDK_TX_DESC_BLOCK_CNT) != + round_down(wr_idx + cnt - 1, NFDK_TX_DESC_BLOCK_CNT))) + goto err_warn_overflow; + + tx_ring->wr_p += cnt; + if (tx_ring->wr_p % NFDK_TX_DESC_BLOCK_CNT) + tx_ring->data_pending += skb->len; + else + tx_ring->data_pending = 0; + + tx_ring->wr_ptr_add += cnt; + nfp_net_tx_xmit_more_flush(tx_ring); + + return NETDEV_TX_OK; + +err_warn_overflow: + WARN_ONCE(1, "unable to fit packet into a descriptor wr_idx:%d head:%d frags:%d cnt:%d", + wr_idx, skb_headlen(skb), 0, cnt); + txbuf--; + dma_unmap_single(dp->dev, txbuf->dma_addr, + skb_headlen(skb), DMA_TO_DEVICE); + txbuf->raw = 0; +err_warn_dma: + nn_dp_warn(dp, "Failed to map DMA TX buffer\n"); +err_free: + u64_stats_update_begin(&r_vec->tx_sync); + r_vec->tx_errors++; + u64_stats_update_end(&r_vec->tx_sync); + dev_kfree_skb_any(skb); + return NETDEV_TX_OK; +} + +static void __nfp_ctrl_tx_queued(struct nfp_net_r_vector *r_vec) +{ + struct sk_buff *skb; + + while ((skb = __skb_dequeue(&r_vec->queue))) + if (nfp_nfdk_ctrl_tx_one(r_vec->nfp_net, r_vec, skb, true)) + return; +} + +static bool +nfp_ctrl_meta_ok(struct nfp_net *nn, void *data, unsigned int meta_len) +{ + u32 meta_type, meta_tag; + + if (!nfp_app_ctrl_has_meta(nn->app)) + return !meta_len; + + if (meta_len != 8) + return false; + + meta_type = get_unaligned_be32(data); + meta_tag = get_unaligned_be32(data + 4); + + return (meta_type == NFP_NET_META_PORTID && + meta_tag == NFP_META_PORT_ID_CTRL); +} + +static bool +nfp_ctrl_rx_one(struct nfp_net *nn, struct nfp_net_dp *dp, + struct nfp_net_r_vector *r_vec, struct nfp_net_rx_ring *rx_ring) +{ + unsigned int meta_len, data_len, meta_off, pkt_len, pkt_off; + struct nfp_net_rx_buf *rxbuf; + struct nfp_net_rx_desc *rxd; + dma_addr_t new_dma_addr; + struct sk_buff *skb; + void *new_frag; + int idx; + + idx = D_IDX(rx_ring, rx_ring->rd_p); + + rxd = &rx_ring->rxds[idx]; + if (!(rxd->rxd.meta_len_dd & PCIE_DESC_RX_DD)) + return false; + + /* Memory barrier to ensure that we won't do other reads + * before the DD bit. 
+ */ + dma_rmb(); + + rx_ring->rd_p++; + + rxbuf = &rx_ring->rxbufs[idx]; + meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK; + data_len = le16_to_cpu(rxd->rxd.data_len); + pkt_len = data_len - meta_len; + + pkt_off = NFP_NET_RX_BUF_HEADROOM + dp->rx_dma_off; + if (dp->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) + pkt_off += meta_len; + else + pkt_off += dp->rx_offset; + meta_off = pkt_off - meta_len; + + /* Stats update */ + u64_stats_update_begin(&r_vec->rx_sync); + r_vec->rx_pkts++; + r_vec->rx_bytes += pkt_len; + u64_stats_update_end(&r_vec->rx_sync); + + nfp_net_dma_sync_cpu_rx(dp, rxbuf->dma_addr + meta_off, data_len); + + if (unlikely(!nfp_ctrl_meta_ok(nn, rxbuf->frag + meta_off, meta_len))) { + nn_dp_warn(dp, "incorrect metadata for ctrl packet (%d)\n", + meta_len); + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + return true; + } + + skb = build_skb(rxbuf->frag, dp->fl_bufsz); + if (unlikely(!skb)) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL); + return true; + } + new_frag = nfp_nfdk_napi_alloc_one(dp, &new_dma_addr); + if (unlikely(!new_frag)) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, skb); + return true; + } + + nfp_net_dma_unmap_rx(dp, rxbuf->dma_addr); + + nfp_nfdk_rx_give_one(dp, rx_ring, new_frag, new_dma_addr); + + skb_reserve(skb, pkt_off); + skb_put(skb, pkt_len); + + nfp_app_ctrl_rx(nn->app, skb); + + return true; +} + +static bool nfp_ctrl_rx(struct nfp_net_r_vector *r_vec) +{ + struct nfp_net_rx_ring *rx_ring = r_vec->rx_ring; + struct nfp_net *nn = r_vec->nfp_net; + struct nfp_net_dp *dp = &nn->dp; + unsigned int budget = 512; + + while (nfp_ctrl_rx_one(nn, dp, r_vec, rx_ring) && budget--) + continue; + + return budget; +} + +void nfp_nfdk_ctrl_poll(struct tasklet_struct *t) +{ + struct nfp_net_r_vector *r_vec = from_tasklet(r_vec, t, tasklet); + + spin_lock(&r_vec->lock); + nfp_nfdk_tx_complete(r_vec->tx_ring, 0); + __nfp_ctrl_tx_queued(r_vec); + spin_unlock(&r_vec->lock); + + if (nfp_ctrl_rx(r_vec)) { + nfp_net_irq_unmask(r_vec->nfp_net, r_vec->irq_entry); + } else { + tasklet_schedule(&r_vec->tasklet); + nn_dp_warn(&r_vec->nfp_net->dp, + "control message budget exceeded!\n"); + } +} diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h new file mode 100644 index 000000000000..5107c4f03feb --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h @@ -0,0 +1,112 @@ +/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */ +/* Copyright (C) 2019 Netronome Systems, Inc. 
*/ + +#ifndef _NFP_DP_NFDK_H_ +#define _NFP_DP_NFDK_H_ + +#include +#include + +#define NFDK_TX_DESC_PER_SIMPLE_PKT 2 + +#define NFDK_TX_MAX_DATA_PER_HEAD SZ_4K +#define NFDK_TX_MAX_DATA_PER_DESC SZ_16K +#define NFDK_TX_DESC_BLOCK_SZ 256 +#define NFDK_TX_DESC_BLOCK_CNT (NFDK_TX_DESC_BLOCK_SZ / \ + sizeof(struct nfp_nfdk_tx_desc)) +#define NFDK_TX_DESC_STOP_CNT (NFDK_TX_DESC_BLOCK_CNT * \ + NFDK_TX_DESC_PER_SIMPLE_PKT) +#define NFDK_TX_MAX_DATA_PER_BLOCK SZ_64K +#define NFDK_TX_DESC_GATHER_MAX 17 + +/* TX descriptor format */ + +#define NFDK_DESC_TX_MSS_MASK GENMASK(13, 0) + +#define NFDK_DESC_TX_CHAIN_META BIT(3) +#define NFDK_DESC_TX_ENCAP BIT(2) +#define NFDK_DESC_TX_L4_CSUM BIT(1) +#define NFDK_DESC_TX_L3_CSUM BIT(0) + +#define NFDK_DESC_TX_DMA_LEN_HEAD GENMASK(11, 0) +#define NFDK_DESC_TX_TYPE_HEAD GENMASK(15, 12) +#define NFDK_DESC_TX_DMA_LEN GENMASK(13, 0) +#define NFDK_DESC_TX_TYPE_NOP 0 +#define NFDK_DESC_TX_TYPE_GATHER 1 +#define NFDK_DESC_TX_TYPE_TSO 2 +#define NFDK_DESC_TX_TYPE_SIMPLE 8 +#define NFDK_DESC_TX_EOP BIT(14) + +#define NFDK_META_LEN GENMASK(7, 0) +#define NFDK_META_FIELDS GENMASK(31, 8) + +#define D_BLOCK_CPL(idx) (NFDK_TX_DESC_BLOCK_CNT - \ + (idx) % NFDK_TX_DESC_BLOCK_CNT) + +struct nfp_nfdk_tx_desc { + union { + struct { + u8 dma_addr_hi; /* High bits of host buf address */ + u8 padding; /* Must be zero */ + __le16 dma_len_type; /* Length to DMA for this desc */ + __le32 dma_addr_lo; /* Low 32bit of host buf addr */ + }; + + struct { + __le16 mss; /* MSS to be used for LSO */ + u8 lso_hdrlen; /* LSO, TCP payload offset */ + u8 lso_totsegs; /* LSO, total segments */ + u8 l3_offset; /* L3 header offset */ + u8 l4_offset; /* L4 header offset */ + __le16 lso_meta_res; /* Rsvd bits in TSO metadata */ + }; + + struct { + u8 flags; /* TX Flags, see @NFDK_DESC_TX_* */ + u8 reserved[7]; /* meta byte placeholder */ + }; + + __le32 vals[2]; + __le64 raw; + }; +}; + +struct nfp_nfdk_tx_buf { + union { + /* First slot */ + union { + struct sk_buff *skb; + void *frag; + }; + + /* 1 + nr_frags next slots */ + dma_addr_t dma_addr; + + /* TSO (optional) */ + struct { + u32 pkt_cnt; + u32 real_len; + }; + + u64 raw; + }; +}; + +static inline int nfp_nfdk_headlen_to_segs(unsigned int headlen) +{ + /* First descriptor fits less data, so adjust for that */ + return DIV_ROUND_UP(headlen + + NFDK_TX_MAX_DATA_PER_DESC - + NFDK_TX_MAX_DATA_PER_HEAD, + NFDK_TX_MAX_DATA_PER_DESC); +} + +int nfp_nfdk_poll(struct napi_struct *napi, int budget); +netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev); +bool +nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, + struct sk_buff *skb, bool old); +void nfp_nfdk_ctrl_poll(struct tasklet_struct *t); +void nfp_nfdk_rx_ring_fill_freelist(struct nfp_net_dp *dp, + struct nfp_net_rx_ring *rx_ring); +#endif diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/rings.c b/drivers/net/ethernet/netronome/nfp/nfdk/rings.c new file mode 100644 index 000000000000..301f11108826 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfdk/rings.c @@ -0,0 +1,195 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2019 Netronome Systems, Inc. 
*/ + +#include + +#include "../nfp_net.h" +#include "../nfp_net_dp.h" +#include "nfdk.h" + +static void +nfp_nfdk_tx_ring_reset(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + struct device *dev = dp->dev; + struct netdev_queue *nd_q; + + while (!tx_ring->is_xdp && tx_ring->rd_p != tx_ring->wr_p) { + const skb_frag_t *frag, *fend; + unsigned int size, n_descs = 1; + struct nfp_nfdk_tx_buf *txbuf; + int nr_frags, rd_idx; + struct sk_buff *skb; + + rd_idx = D_IDX(tx_ring, tx_ring->rd_p); + txbuf = &tx_ring->ktxbufs[rd_idx]; + + skb = txbuf->skb; + if (!skb) { + n_descs = D_BLOCK_CPL(tx_ring->rd_p); + goto next; + } + + nr_frags = skb_shinfo(skb)->nr_frags; + txbuf++; + + /* Unmap head */ + size = skb_headlen(skb); + dma_unmap_single(dev, txbuf->dma_addr, size, DMA_TO_DEVICE); + n_descs += nfp_nfdk_headlen_to_segs(size); + txbuf++; + + frag = skb_shinfo(skb)->frags; + fend = frag + nr_frags; + for (; frag < fend; frag++) { + size = skb_frag_size(frag); + dma_unmap_page(dev, txbuf->dma_addr, + skb_frag_size(frag), DMA_TO_DEVICE); + n_descs += DIV_ROUND_UP(size, + NFDK_TX_MAX_DATA_PER_DESC); + txbuf++; + } + + if (skb_is_gso(skb)) + n_descs++; + + dev_kfree_skb_any(skb); +next: + tx_ring->rd_p += n_descs; + } + + memset(tx_ring->txds, 0, tx_ring->size); + tx_ring->data_pending = 0; + tx_ring->wr_p = 0; + tx_ring->rd_p = 0; + tx_ring->qcp_rd_p = 0; + tx_ring->wr_ptr_add = 0; + + if (tx_ring->is_xdp || !dp->netdev) + return; + + nd_q = netdev_get_tx_queue(dp->netdev, tx_ring->idx); + netdev_tx_reset_queue(nd_q); +} + +static void nfp_nfdk_tx_ring_free(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + + kvfree(tx_ring->ktxbufs); + + if (tx_ring->ktxds) + dma_free_coherent(dp->dev, tx_ring->size, + tx_ring->ktxds, tx_ring->dma); + + tx_ring->cnt = 0; + tx_ring->txbufs = NULL; + tx_ring->txds = NULL; + tx_ring->dma = 0; + tx_ring->size = 0; +} + +static int +nfp_nfdk_tx_ring_alloc(struct nfp_net_dp *dp, struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + + tx_ring->cnt = dp->txd_cnt * NFDK_TX_DESC_PER_SIMPLE_PKT; + tx_ring->size = array_size(tx_ring->cnt, sizeof(*tx_ring->ktxds)); + tx_ring->ktxds = dma_alloc_coherent(dp->dev, tx_ring->size, + &tx_ring->dma, + GFP_KERNEL | __GFP_NOWARN); + if (!tx_ring->ktxds) { + netdev_warn(dp->netdev, "failed to allocate TX descriptor ring memory, requested descriptor count: %d, consider lowering descriptor count\n", + tx_ring->cnt); + goto err_alloc; + } + + tx_ring->ktxbufs = kvcalloc(tx_ring->cnt, sizeof(*tx_ring->ktxbufs), + GFP_KERNEL); + if (!tx_ring->ktxbufs) + goto err_alloc; + + if (!tx_ring->is_xdp && dp->netdev) + netif_set_xps_queue(dp->netdev, &r_vec->affinity_mask, + tx_ring->idx); + + return 0; + +err_alloc: + nfp_nfdk_tx_ring_free(tx_ring); + return -ENOMEM; +} + +static void +nfp_nfdk_tx_ring_bufs_free(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ +} + +static int +nfp_nfdk_tx_ring_bufs_alloc(struct nfp_net_dp *dp, + struct nfp_net_tx_ring *tx_ring) +{ + return 0; +} + +static void +nfp_nfdk_print_tx_descs(struct seq_file *file, + struct nfp_net_r_vector *r_vec, + struct nfp_net_tx_ring *tx_ring, + u32 d_rd_p, u32 d_wr_p) +{ + struct nfp_nfdk_tx_desc *txd; + u32 txd_cnt = tx_ring->cnt; + int i; + + for (i = 0; i < txd_cnt; i++) { + txd = &tx_ring->ktxds[i]; + + seq_printf(file, "%04d: 0x%08x 0x%08x 0x%016llx", i, + txd->vals[0], txd->vals[1], tx_ring->ktxbufs[i].raw); + + if (i == tx_ring->rd_p % 
txd_cnt) + seq_puts(file, " H_RD"); + if (i == tx_ring->wr_p % txd_cnt) + seq_puts(file, " H_WR"); + if (i == d_rd_p % txd_cnt) + seq_puts(file, " D_RD"); + if (i == d_wr_p % txd_cnt) + seq_puts(file, " D_WR"); + + seq_putc(file, '\n'); + } +} + +#define NFP_NFDK_CFG_CTRL_SUPPORTED \ + (NFP_NET_CFG_CTRL_ENABLE | NFP_NET_CFG_CTRL_PROMISC | \ + NFP_NET_CFG_CTRL_L2BC | NFP_NET_CFG_CTRL_L2MC | \ + NFP_NET_CFG_CTRL_RXCSUM | NFP_NET_CFG_CTRL_TXCSUM | \ + NFP_NET_CFG_CTRL_RXVLAN | \ + NFP_NET_CFG_CTRL_GATHER | NFP_NET_CFG_CTRL_LSO | \ + NFP_NET_CFG_CTRL_CTAG_FILTER | NFP_NET_CFG_CTRL_CMSG_DATA | \ + NFP_NET_CFG_CTRL_RINGCFG | NFP_NET_CFG_CTRL_IRQMOD | \ + NFP_NET_CFG_CTRL_TXRWB | \ + NFP_NET_CFG_CTRL_VXLAN | NFP_NET_CFG_CTRL_NVGRE | \ + NFP_NET_CFG_CTRL_BPF | NFP_NET_CFG_CTRL_LSO2 | \ + NFP_NET_CFG_CTRL_RSS2 | NFP_NET_CFG_CTRL_CSUM_COMPLETE | \ + NFP_NET_CFG_CTRL_LIVE_ADDR) + +const struct nfp_dp_ops nfp_nfdk_ops = { + .version = NFP_NFD_VER_NFDK, + .tx_min_desc_per_pkt = NFDK_TX_DESC_PER_SIMPLE_PKT, + .cap_mask = NFP_NFDK_CFG_CTRL_SUPPORTED, + .poll = nfp_nfdk_poll, + .ctrl_poll = nfp_nfdk_ctrl_poll, + .xmit = nfp_nfdk_tx, + .ctrl_tx_one = nfp_nfdk_ctrl_tx_one, + .rx_ring_fill_freelist = nfp_nfdk_rx_ring_fill_freelist, + .tx_ring_alloc = nfp_nfdk_tx_ring_alloc, + .tx_ring_reset = nfp_nfdk_tx_ring_reset, + .tx_ring_free = nfp_nfdk_tx_ring_free, + .tx_ring_bufs_alloc = nfp_nfdk_tx_ring_bufs_alloc, + .tx_ring_bufs_free = nfp_nfdk_tx_ring_bufs_free, + .print_tx_descs = nfp_nfdk_print_tx_descs +}; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h index e7646377de37..428783b7018b 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h @@ -108,6 +108,9 @@ struct xsk_buff_pool; struct nfp_nfd3_tx_desc; struct nfp_nfd3_tx_buf; +struct nfp_nfdk_tx_desc; +struct nfp_nfdk_tx_buf; + /* Convenience macro for wrapping descriptor index on ring size */ #define D_IDX(ring, idx) ((idx) & ((ring)->cnt - 1)) @@ -125,6 +128,7 @@ struct nfp_nfd3_tx_buf; * struct nfp_net_tx_ring - TX ring structure * @r_vec: Back pointer to ring vector structure * @idx: Ring index from Linux's perspective + * @data_pending: number of bytes added to current block (NFDK only) * @qcp_q: Pointer to base of the QCP TX queue * @txrwb: TX pointer write back area * @cnt: Size of the queue in number of descriptors @@ -133,8 +137,10 @@ struct nfp_nfd3_tx_buf; * @qcp_rd_p: Local copy of QCP TX queue read pointer * @wr_ptr_add: Accumulated number of buffers to add to QCP write pointer * (used for .xmit_more delayed kick) - * @txbufs: Array of transmitted TX buffers, to free on transmit - * @txds: Virtual address of TX ring in host memory + * @txbufs: Array of transmitted TX buffers, to free on transmit (NFD3) + * @ktxbufs: Array of transmitted TX buffers, to free on transmit (NFDK) + * @txds: Virtual address of TX ring in host memory (NFD3) + * @ktxds: Virtual address of TX ring in host memory (NFDK) * * @qcidx: Queue Controller Peripheral (QCP) queue index for the TX queue * @dma: DMA address of the TX ring @@ -144,7 +150,8 @@ struct nfp_nfd3_tx_buf; struct nfp_net_tx_ring { struct nfp_net_r_vector *r_vec; - u32 idx; + u16 idx; + u16 data_pending; u8 __iomem *qcp_q; u64 *txrwb; @@ -155,8 +162,14 @@ struct nfp_net_tx_ring { u32 wr_ptr_add; - struct nfp_nfd3_tx_buf *txbufs; - struct nfp_nfd3_tx_desc *txds; + union { + struct nfp_nfd3_tx_buf *txbufs; + struct nfp_nfdk_tx_buf *ktxbufs; + }; + union { + struct nfp_nfd3_tx_desc *txds; + struct 
nfp_nfdk_tx_desc *ktxds; + }; /* Cold data follows */ int qcidx; @@ -860,10 +873,12 @@ static inline void nn_ctrl_bar_unlock(struct nfp_net *nn) extern const char nfp_driver_version[]; extern const struct net_device_ops nfp_nfd3_netdev_ops; +extern const struct net_device_ops nfp_nfdk_netdev_ops; static inline bool nfp_netdev_is_nfp_net(struct net_device *netdev) { - return netdev->netdev_ops == &nfp_nfd3_netdev_ops; + return netdev->netdev_ops == &nfp_nfd3_netdev_ops || + netdev->netdev_ops == &nfp_nfdk_netdev_ops; } static inline int nfp_net_coalesce_para_check(u32 usecs, u32 pkts) diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 0aa91065a7cb..b412670d89b2 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -1920,6 +1920,33 @@ const struct net_device_ops nfp_nfd3_netdev_ops = { .ndo_get_devlink_port = nfp_devlink_get_devlink_port, }; +const struct net_device_ops nfp_nfdk_netdev_ops = { + .ndo_init = nfp_app_ndo_init, + .ndo_uninit = nfp_app_ndo_uninit, + .ndo_open = nfp_net_netdev_open, + .ndo_stop = nfp_net_netdev_close, + .ndo_start_xmit = nfp_net_tx, + .ndo_get_stats64 = nfp_net_stat64, + .ndo_vlan_rx_add_vid = nfp_net_vlan_rx_add_vid, + .ndo_vlan_rx_kill_vid = nfp_net_vlan_rx_kill_vid, + .ndo_set_vf_mac = nfp_app_set_vf_mac, + .ndo_set_vf_vlan = nfp_app_set_vf_vlan, + .ndo_set_vf_spoofchk = nfp_app_set_vf_spoofchk, + .ndo_set_vf_trust = nfp_app_set_vf_trust, + .ndo_get_vf_config = nfp_app_get_vf_config, + .ndo_set_vf_link_state = nfp_app_set_vf_link_state, + .ndo_setup_tc = nfp_port_setup_tc, + .ndo_tx_timeout = nfp_net_tx_timeout, + .ndo_set_rx_mode = nfp_net_set_rx_mode, + .ndo_change_mtu = nfp_net_change_mtu, + .ndo_set_mac_address = nfp_net_set_mac_address, + .ndo_set_features = nfp_net_set_features, + .ndo_features_check = nfp_net_features_check, + .ndo_get_phys_port_name = nfp_net_get_phys_port_name, + .ndo_bpf = nfp_net_xdp, + .ndo_get_devlink_port = nfp_devlink_get_devlink_port, +}; + static int nfp_udp_tunnel_sync(struct net_device *netdev, unsigned int table) { struct nfp_net *nn = netdev_priv(netdev); @@ -2042,6 +2069,16 @@ nfp_net_alloc(struct pci_dev *pdev, const struct nfp_dev_info *dev_info, case NFP_NET_CFG_VERSION_DP_NFD3: nn->dp.ops = &nfp_nfd3_ops; break; + case NFP_NET_CFG_VERSION_DP_NFDK: + if (nn->fw_ver.major < 5) { + dev_err(&pdev->dev, + "NFDK must use ABI 5 or newer, found: %d\n", + nn->fw_ver.major); + err = -EINVAL; + goto err_free_nn; + } + nn->dp.ops = &nfp_nfdk_ops; + break; default: err = -EINVAL; goto err_free_nn; @@ -2268,6 +2305,9 @@ static void nfp_net_netdev_init(struct nfp_net *nn) case NFP_NFD_VER_NFD3: netdev->netdev_ops = &nfp_nfd3_netdev_ops; break; + case NFP_NFD_VER_NFDK: + netdev->netdev_ops = &nfp_nfdk_netdev_ops; + break; } netdev->watchdog_timeo = msecs_to_jiffies(5 * 1000); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h index 7f04a5275a2d..8892a94f00c3 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h @@ -151,6 +151,7 @@ #define NFP_NET_CFG_VERSION 0x0030 #define NFP_NET_CFG_VERSION_RESERVED_MASK (0xfe << 24) #define NFP_NET_CFG_VERSION_DP_NFD3 0 +#define NFP_NET_CFG_VERSION_DP_NFDK 1 #define NFP_NET_CFG_VERSION_DP_MASK 1 #define NFP_NET_CFG_VERSION_CLASS_MASK (0xff << 16) #define NFP_NET_CFG_VERSION_CLASS(x) (((x) & 0xff) << 16) diff --git 
a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h index 237ca1d9c886..c934cc2d3208 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_dp.h @@ -109,6 +109,7 @@ void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring); enum nfp_nfd_version { NFP_NFD_VER_NFD3, + NFP_NFD_VER_NFDK, }; /** @@ -207,6 +208,7 @@ nfp_net_debugfs_print_tx_descs(struct seq_file *file, struct nfp_net_dp *dp, } extern const struct nfp_dp_ops nfp_nfd3_ops; +extern const struct nfp_dp_ops nfp_nfdk_ops; netdev_tx_t nfp_net_tx(struct sk_buff *skb, struct net_device *netdev); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c index 50a59aad70f4..86829446c637 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_xsk.c @@ -112,6 +112,10 @@ int nfp_net_xsk_setup_pool(struct net_device *netdev, struct nfp_net_dp *dp; int err; + /* NFDK doesn't implement xsk yet. */ + if (nn->dp.ops->version == NFP_NFD_VER_NFDK) + return -EOPNOTSUPP; + /* Reject on old FWs so we can drop some checks on datapath. */ if (nn->dp.rx_offset != NFP_NET_CFG_RX_OFFSET_DYNAMIC) return -EOPNOTSUPP; From patchwork Mon Mar 21 10:42:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Horman X-Patchwork-Id: 12787083 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7E89AC433F5 for ; Mon, 21 Mar 2022 10:43:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1346440AbiCUKo3 (ORCPT ); Mon, 21 Mar 2022 06:44:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35506 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1346453AbiCUKoP (ORCPT ); Mon, 21 Mar 2022 06:44:15 -0400 Received: from NAM12-DM6-obe.outbound.protection.outlook.com (mail-dm6nam12on2132.outbound.protection.outlook.com [40.107.243.132]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 787E114AC87 for ; Mon, 21 Mar 2022 03:42:45 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=erEXAWJ4mCZ+vJa8SYBGYNYUm7vjNniMx2js7+JqHREJ7KL7Q29zSIaa0XYeNp5qGF1mNirRleaadE5MWRjF3FeXwnvCnp1EwsdEr+aXUbojfy6de0oVqVNv8S8O5ikP+SuvUgrQfj8I10zVcdi5Z4Xl0yde1To1m6UKl4D0U0kp+/+xiWXHl6a5IoRCQusyJwFzWp0uX7m5mxX/qu0TAIUtrhEV+sSgt9kvOQERw1ORHIOSwRiIt4KFIyxqPC6jmhq1tKF/wNhXSIl1unPCSaZR8ylBy8T9QAsPu/O3x16+8tbfEQ8J3Kk4MRmr+Lg3HWguntzkafHhPngDmGxMIw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=Iek99xu795rIT4PwJPOY/uUr49z9eODgP+STBRniy+4=; b=akp4KzohRecJK5aOnv4SoN1y+9GkKMV3vZsMjd9KbFUoxSQm08cER5AOioPCdsXEr+7RvqY50WKrP7b3IhVS5fwzWgv7eYIa/9dD9bwkJcvl73YvquQe0byyrd3NT9/bm7WV4yiFQyOyj63Rcuv2MBjlKre40Xk4uOlnSESsAEA9/+YPur88Ay7F559hZfg9+QEMr7thIUjNMmbz1SSUZvuNrWnhhVg2w5WBwvXTLGs+6V/20xpYSaWlfs/JJeEwRuh8v4Wj1JTFnp522zxMDRQNtherMvMCSRtrS0+XUMEYydj8SbTPXr1fuNWLIvr+aoOrc4VFtfdy+YiHOJvpsg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass 
smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com; dkim=pass header.d=corigine.com; arc=none
From: Simon Horman
To: David Miller , Jakub Kicinski
Cc: Yinjun Zhang , netdev@vger.kernel.org, oss-drivers@corigine.com
Subject: [PATCH net-next v2 10/10] nfp: nfdk: implement xdp tx path for NFDK
Date: Mon, 21 Mar 2022 11:42:09 +0100
Message-Id: <20220321104209.273535-11-simon.horman@corigine.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220321104209.273535-1-simon.horman@corigine.com>
References: <20220321104209.273535-1-simon.horman@corigine.com>
MIME-Version: 1.0
X-OriginatorOrg: corigine.com
Precedence: bulk
List-ID: X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-Delegate: kuba@kernel.org

From: Yinjun Zhang

Because txbufs are defined differently in NFDK than in NFD3, there are no pre-allocated txbufs for XDP use in the NFDK implementation; the existing rxbuf is reused and recycled once the XDP TX operation completes. Each packet sent on the XDP path may use no more than `NFDK_TX_DESC_PER_SIMPLE_PKT` txbufs: one stashes the buffer's virtual address and the other its DMA address, so the number of transmitted bytes is currently not accumulated. The least significant bit of the virtual address is also borrowed to mark the start of a newly transmitted packet, which the buffer's alignment guarantees is otherwise zero.
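The low-bit tagging described above works because the stashed frag addresses are at least pointer-aligned, so the bottom bits of the address are known to be zero and can carry a flag. The following is a minimal, self-contained userspace sketch of that idea; the macro and variable names here are illustrative only and are not the driver's API (the driver's actual NFDK_TX_BUF_PTR / NFDK_TX_BUF_INFO / NFDK_TX_BUF_INFO_SOP helpers appear in the nfdk.h hunk further down).

/* Illustrative sketch of low-bit pointer tagging, not driver code.
 * A pointer to pointer-aligned storage has its low bits clear, so one
 * of them can carry a start-of-packet flag alongside the address.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_INFO_SOP 0x1UL                       /* hypothetical SOP flag */
#define BUF_PTR(v)   ((v) & ~(sizeof(void *) - 1))
#define BUF_INFO(v)  ((v) & (sizeof(void *) - 1))

int main(void)
{
	unsigned long val;
	void *frag;

	/* malloc() returns at least pointer-aligned memory. */
	frag = malloc(2048);
	assert(((uintptr_t)frag & (sizeof(void *) - 1)) == 0);

	/* Stash the pointer together with the SOP flag in one word... */
	val = (unsigned long)frag | BUF_INFO_SOP;

	/* ...and recover both later, e.g. at TX completion time. */
	printf("ptr back: %p, sop: %lu\n",
	       (void *)BUF_PTR(val), BUF_INFO(val) & BUF_INFO_SOP);

	free((void *)BUF_PTR(val));
	return 0;
}

At completion time the driver can then tell a start-of-packet slot from a plain DMA-address slot purely by inspecting those low bits, without keeping a separate per-descriptor metadata array.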
Signed-off-by: Yinjun Zhang Signed-off-by: Fei Qin Signed-off-by: Simon Horman --- drivers/net/ethernet/netronome/nfp/nfdk/dp.c | 196 +++++++++++++++++- .../net/ethernet/netronome/nfp/nfdk/nfdk.h | 17 ++ 2 files changed, 208 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c index f03de6b7988b..e3da9ac20e57 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c @@ -541,11 +541,6 @@ static void nfp_nfdk_tx_complete(struct nfp_net_tx_ring *tx_ring, int budget) tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); } -static bool nfp_nfdk_xdp_complete(struct nfp_net_tx_ring *tx_ring) -{ - return true; -} - /* Receive processing */ static void * nfp_nfdk_napi_alloc_one(struct nfp_net_dp *dp, dma_addr_t *dma_addr) @@ -791,6 +786,185 @@ nfp_nfdk_rx_drop(const struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec, dev_kfree_skb_any(skb); } +static bool nfp_nfdk_xdp_complete(struct nfp_net_tx_ring *tx_ring) +{ + struct nfp_net_r_vector *r_vec = tx_ring->r_vec; + struct nfp_net_dp *dp = &r_vec->nfp_net->dp; + struct nfp_net_rx_ring *rx_ring; + u32 qcp_rd_p, done = 0; + bool done_all; + int todo; + + /* Work out how many descriptors have been transmitted */ + qcp_rd_p = nfp_net_read_tx_cmpl(tx_ring, dp); + if (qcp_rd_p == tx_ring->qcp_rd_p) + return true; + + todo = D_IDX(tx_ring, qcp_rd_p - tx_ring->qcp_rd_p); + + done_all = todo <= NFP_NET_XDP_MAX_COMPLETE; + todo = min(todo, NFP_NET_XDP_MAX_COMPLETE); + + rx_ring = r_vec->rx_ring; + while (todo > 0) { + int idx = D_IDX(tx_ring, tx_ring->rd_p + done); + struct nfp_nfdk_tx_buf *txbuf; + unsigned int step = 1; + + txbuf = &tx_ring->ktxbufs[idx]; + if (!txbuf->raw) + goto next; + + if (NFDK_TX_BUF_INFO(txbuf->val) != NFDK_TX_BUF_INFO_SOP) { + WARN_ONCE(1, "Unexpected TX buffer in XDP TX ring\n"); + goto next; + } + + /* Two successive txbufs are used to stash virtual and dma + * address respectively, recycle and clean them here. + */ + nfp_nfdk_rx_give_one(dp, rx_ring, + (void *)NFDK_TX_BUF_PTR(txbuf[0].val), + txbuf[1].dma_addr); + txbuf[0].raw = 0; + txbuf[1].raw = 0; + step = 2; + + u64_stats_update_begin(&r_vec->tx_sync); + /* Note: tx_bytes not accumulated. */ + r_vec->tx_pkts++; + u64_stats_update_end(&r_vec->tx_sync); +next: + todo -= step; + done += step; + } + + tx_ring->qcp_rd_p = D_IDX(tx_ring, tx_ring->qcp_rd_p + done); + tx_ring->rd_p += done; + + WARN_ONCE(tx_ring->wr_p - tx_ring->rd_p > tx_ring->cnt, + "XDP TX ring corruption rd_p=%u wr_p=%u cnt=%u\n", + tx_ring->rd_p, tx_ring->wr_p, tx_ring->cnt); + + return done_all; +} + +static bool +nfp_nfdk_tx_xdp_buf(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring, + struct nfp_net_tx_ring *tx_ring, + struct nfp_net_rx_buf *rxbuf, unsigned int dma_off, + unsigned int pkt_len, bool *completed) +{ + unsigned int dma_map_sz = dp->fl_bufsz - NFP_NET_RX_BUF_NON_DATA; + unsigned int dma_len, type, cnt, dlen_type, tmp_dlen; + struct nfp_nfdk_tx_buf *txbuf; + struct nfp_nfdk_tx_desc *txd; + unsigned int n_descs; + dma_addr_t dma_addr; + int wr_idx; + + /* Reject if xdp_adjust_tail grow packet beyond DMA area */ + if (pkt_len + dma_off > dma_map_sz) + return false; + + /* Make sure there's still at least one block available after + * aligning to block boundary, so that the txds used below + * won't wrap around the tx_ring. 
+ */ + if (unlikely(nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT))) { + if (!*completed) { + nfp_nfdk_xdp_complete(tx_ring); + *completed = true; + } + + if (unlikely(nfp_net_tx_full(tx_ring, NFDK_TX_DESC_STOP_CNT))) { + nfp_nfdk_rx_drop(dp, rx_ring->r_vec, rx_ring, rxbuf, + NULL); + return false; + } + } + + /* Check if cross block boundary */ + n_descs = nfp_nfdk_headlen_to_segs(pkt_len); + if ((round_down(tx_ring->wr_p, NFDK_TX_DESC_BLOCK_CNT) != + round_down(tx_ring->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT)) || + ((u32)tx_ring->data_pending + pkt_len > + NFDK_TX_MAX_DATA_PER_BLOCK)) { + unsigned int nop_slots = D_BLOCK_CPL(tx_ring->wr_p); + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + txd = &tx_ring->ktxds[wr_idx]; + memset(txd, 0, + array_size(nop_slots, sizeof(struct nfp_nfdk_tx_desc))); + + tx_ring->data_pending = 0; + tx_ring->wr_p += nop_slots; + tx_ring->wr_ptr_add += nop_slots; + } + + wr_idx = D_IDX(tx_ring, tx_ring->wr_p); + + txbuf = &tx_ring->ktxbufs[wr_idx]; + + txbuf[0].val = (unsigned long)rxbuf->frag | NFDK_TX_BUF_INFO_SOP; + txbuf[1].dma_addr = rxbuf->dma_addr; + /* Note: pkt len not stored */ + + dma_sync_single_for_device(dp->dev, rxbuf->dma_addr + dma_off, + pkt_len, DMA_BIDIRECTIONAL); + + /* Build TX descriptor */ + txd = &tx_ring->ktxds[wr_idx]; + dma_len = pkt_len; + dma_addr = rxbuf->dma_addr + dma_off; + + if (dma_len < NFDK_TX_MAX_DATA_PER_HEAD) + type = NFDK_DESC_TX_TYPE_SIMPLE; + else + type = NFDK_DESC_TX_TYPE_GATHER; + + /* FIELD_PREP() implicitly truncates to chunk */ + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN_HEAD, dma_len) | + FIELD_PREP(NFDK_DESC_TX_TYPE_HEAD, type); + + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + tmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD; + dma_len -= tmp_dlen; + dma_addr += tmp_dlen + 1; + txd++; + + while (dma_len > 0) { + dma_len -= 1; + dlen_type = FIELD_PREP(NFDK_DESC_TX_DMA_LEN, dma_len); + txd->dma_len_type = cpu_to_le16(dlen_type); + nfp_desc_set_dma_addr(txd, dma_addr); + + dlen_type &= NFDK_DESC_TX_DMA_LEN; + dma_len -= dlen_type; + dma_addr += dlen_type + 1; + txd++; + } + + (txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP); + + /* Metadata desc */ + txd->raw = 0; + txd++; + + cnt = txd - tx_ring->ktxds - wr_idx; + tx_ring->wr_p += cnt; + if (tx_ring->wr_p % NFDK_TX_DESC_BLOCK_CNT) + tx_ring->data_pending += pkt_len; + else + tx_ring->data_pending = 0; + + tx_ring->wr_ptr_add += cnt; + return true; +} + /** * nfp_nfdk_rx() - receive up to @budget packets on @rx_ring * @rx_ring: RX ring to receive from @@ -903,6 +1077,7 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget) if (xdp_prog && !meta.portid) { void *orig_data = rxbuf->frag + pkt_off; + unsigned int dma_off; int act; xdp_prepare_buff(&xdp, @@ -919,6 +1094,17 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget) case XDP_PASS: meta_len_xdp = xdp.data - xdp.data_meta; break; + case XDP_TX: + dma_off = pkt_off - NFP_NET_RX_BUF_HEADROOM; + if (unlikely(!nfp_nfdk_tx_xdp_buf(dp, rx_ring, + tx_ring, + rxbuf, + dma_off, + pkt_len, + &xdp_tx_cmpl))) + trace_xdp_exception(dp->netdev, + xdp_prog, act); + continue; default: bpf_warn_invalid_xdp_action(dp->netdev, xdp_prog, act); fallthrough; diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h index 5107c4f03feb..c41e0975eb73 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h +++ b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h @@ -71,12 
+71,29 @@ struct nfp_nfdk_tx_desc { }; }; +/* The device don't make use of the 2 or 3 least significant bits of the address + * due to alignment constraints. The driver can make use of those bits to carry + * information about the buffer before giving it to the device. + * + * NOTE: The driver must clear the lower bits before handing the buffer to the + * device. + * + * - NFDK_TX_BUF_INFO_SOP - Start of a packet + * Mark the buffer as a start of a packet. This is used in the XDP TX process + * to stash virtual and DMA address so that they can be recycled when the TX + * operation is completed. + */ +#define NFDK_TX_BUF_PTR(val) ((val) & ~(sizeof(void *) - 1)) +#define NFDK_TX_BUF_INFO(val) ((val) & (sizeof(void *) - 1)) +#define NFDK_TX_BUF_INFO_SOP BIT(0) + struct nfp_nfdk_tx_buf { union { /* First slot */ union { struct sk_buff *skb; void *frag; + unsigned long val; }; /* 1 + nr_frags next slots */