From patchwork Thu Nov 26 21:20:09 2020
X-Patchwork-Submitter: "Kiyanovski, Arthur"
X-Patchwork-Id: 11934783
X-Patchwork-Delegate: kuba@kernel.org
CC: Arthur Kiyanovski
Subject: [RFC PATCH V2 net-next 1/9] net: ena: use constant value for net_device allocation
Date: Thu, 26 Nov 2020 23:20:09 +0200
Message-ID: <1606425617-13112-2-git-send-email-akiyano@amazon.com>
In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com>
References: <1606425617-13112-1-git-send-email-akiyano@amazon.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC
From: Arthur Kiyanovski

This patch changes the maximum number of RX/TX queues the driver advertises to the kernel (via alloc_etherdev_mq()) from a value received from the device to a constant value: the minimum between 128 and the number of CPUs in the system.

By allocating the net_device struct with a constant number of queues, the driver is able to allocate it at a much earlier stage, before calling any ena_com functions. This allows all log prints in ena_com to use netdev_* log functions instead of the current pr_* ones.

Signed-off-by: Shay Agroskin
Signed-off-by: Arthur Kiyanovski
---
 drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 ++++++++++----------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index df1884d57d1a..985dea1870b5 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -29,6 +29,8 @@ MODULE_LICENSE("GPL"); /* Time in jiffies before concluding the transmitter is hung. */ #define TX_TIMEOUT (5 * HZ) +#define ENA_MAX_RINGS min_t(unsigned int, ENA_MAX_NUM_IO_QUEUES, num_possible_cpus()) + #define ENA_NAPI_BUDGET 64 #define DEFAULT_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_IFUP | \ @@ -4176,18 +4178,34 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) ena_dev->dmadev = &pdev->dev; + netdev = alloc_etherdev_mq(sizeof(struct ena_adapter), ENA_MAX_RINGS); + if (!netdev) { + dev_err(&pdev->dev, "alloc_etherdev_mq failed\n"); + rc = -ENOMEM; + goto err_free_region; + } + + SET_NETDEV_DEV(netdev, &pdev->dev); + adapter = netdev_priv(netdev); + adapter->ena_dev = ena_dev; + adapter->netdev = netdev; + adapter->pdev = pdev; + adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); + + pci_set_drvdata(pdev, adapter); + rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state); if (rc) { dev_err(&pdev->dev, "ENA device init failed\n"); if (rc == -ETIME) rc = -EPROBE_DEFER; - goto err_free_region; + goto err_netdev_destroy; } rc = ena_map_llq_mem_bar(pdev, ena_dev, bars); if (rc) { dev_err(&pdev->dev, "ENA llq bar mapping failed\n"); - goto err_free_ena_dev; + goto err_device_destroy; } calc_queue_ctx.ena_dev = ena_dev; @@ -4207,26 +4225,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) goto err_device_destroy; } - /* dev zeroed in init_etherdev */ - netdev = alloc_etherdev_mq(sizeof(struct ena_adapter), max_num_io_queues); - if (!netdev) { - dev_err(&pdev->dev, "alloc_etherdev_mq failed\n"); - rc = -ENOMEM; - goto err_device_destroy; - } - - SET_NETDEV_DEV(netdev, &pdev->dev); - - adapter = netdev_priv(netdev); - pci_set_drvdata(pdev, adapter); - - adapter->ena_dev = ena_dev; - adapter->netdev = netdev; - adapter->pdev = pdev; - ena_set_conf_feat_params(adapter, &get_feat_ctx); - adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); adapter->reset_reason = ENA_REGS_RESET_NORMAL; adapter->requested_tx_ring_size = calc_queue_ctx.tx_queue_size; @@ -4257,7 +4257,7 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) { dev_err(&pdev->dev, "Failed to query interrupt moderation feature\n"); - goto err_netdev_destroy; + goto err_device_destroy; } ena_init_io_rings(adapter, 0, @@ -4335,11 +4335,11 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) ena_disable_msix(adapter); err_worker_destroy: del_timer(&adapter->timer_service); -err_netdev_destroy: -
free_netdev(netdev); err_device_destroy: ena_com_delete_host_info(ena_dev); ena_com_admin_destroy(ena_dev); +err_netdev_destroy: + free_netdev(netdev); err_free_region: ena_release_bars(ena_dev, pdev); err_free_ena_dev:

From patchwork Thu Nov 26 21:20:10 2020
X-Patchwork-Submitter: "Kiyanovski, Arthur"
X-Patchwork-Id: 11934789
X-Patchwork-Delegate: kuba@kernel.org
CC: Arthur Kiyanovski, Amit Bernstein
Subject: [RFC PATCH V2 net-next 2/9] net: ena: add device distinct log prefix to files
Date: Thu, 26 Nov 2020 23:20:10 +0200
Message-ID: <1606425617-13112-3-git-send-email-akiyano@amazon.com>
In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com>
References: <1606425617-13112-1-git-send-email-akiyano@amazon.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC
From: Arthur Kiyanovski

ENA logs are adjusted to display the full ENA representation in order to distinguish between ENA devices when multiple interfaces are present. Using the netdev_err/warn and dev_info functions for logging provides uniform printing with a clear distinction of the device and interface.

This patch changes all printing in the ena_com files to use the netdev_* logging functions, except for info-level messages. Messages of that level are printed with dev_info because they are issued at an early stage, before the net_device struct is registered.

To allow using netdev_* functions in all ena_com functions, a pointer to the net_device was added to the ena_com_dev struct.

The patch also adds some log messages to make driver debugging easier.

Signed-off-by: Amit Bernstein
Signed-off-by: Shay Agroskin
Signed-off-by: Arthur Kiyanovski
---
 drivers/net/ethernet/amazon/ena/ena_com.c     | 384 +++++++++++-------
 drivers/net/ethernet/amazon/ena/ena_com.h     | 21 +
 drivers/net/ethernet/amazon/ena/ena_eth_com.c | 66 +--
 drivers/net/ethernet/amazon/ena/ena_eth_com.h | 23 +-
 drivers/net/ethernet/amazon/ena/ena_netdev.c  | 2 +
 5 files changed, 312 insertions(+), 184 deletions(-)

diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c index 5f8769aa469d..e168edf3c930 100644 --- a/drivers/net/ethernet/amazon/ena/ena_com.c +++ b/drivers/net/ethernet/amazon/ena/ena_com.c @@ -71,7 +71,8 @@ static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev, dma_addr_t addr) { if ((addr & GENMASK_ULL(ena_dev->dma_addr_bits - 1, 0)) != addr) { - pr_err("DMA address has more bits that the device supports\n"); + netdev_err(ena_dev->net_device, + "DMA address has more bits that the device supports\n"); return -EINVAL; } @@ -83,6 +84,7 @@ static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev, static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue) { + struct ena_com_dev *ena_dev = admin_queue->ena_dev; struct ena_com_admin_sq *sq = &admin_queue->sq; u16 size = ADMIN_SQ_SIZE(admin_queue->q_depth); @@ -90,7 +92,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue) &sq->dma_addr, GFP_KERNEL); if (!sq->entries) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, "Memory allocation failed\n"); return -ENOMEM; } @@ -105,6 +107,7 @@ static int ena_com_admin_init_sq(struct ena_com_admin_queue *admin_queue) static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue) { + struct ena_com_dev *ena_dev = admin_queue->ena_dev; struct ena_com_admin_cq *cq = &admin_queue->cq; u16 size = ADMIN_CQ_SIZE(admin_queue->q_depth); @@ -112,7 +115,7 @@ static int ena_com_admin_init_cq(struct ena_com_admin_queue *admin_queue) &cq->dma_addr, GFP_KERNEL); if (!cq->entries) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, "Memory allocation failed\n"); return -ENOMEM; } @@ -135,7 +138,7 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev, &aenq->dma_addr, GFP_KERNEL); if (!aenq->entries) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, "Memory allocation failed\n"); return -ENOMEM; } @@ -156,7 +159,8 @@ static int ena_com_admin_init_aenq(struct ena_com_dev *ena_dev, writel(aenq_caps, ena_dev->reg_bar +
ENA_REGS_AENQ_CAPS_OFF); if (unlikely(!aenq_handlers)) { - pr_err("AENQ handlers pointer is NULL\n"); + netdev_err(ena_dev->net_device, + "AENQ handlers pointer is NULL\n"); return -EINVAL; } @@ -176,18 +180,21 @@ static struct ena_comp_ctx *get_comp_ctxt(struct ena_com_admin_queue *admin_queu u16 command_id, bool capture) { if (unlikely(command_id >= admin_queue->q_depth)) { - pr_err("Command id is larger than the queue size. cmd_id: %u queue size %d\n", - command_id, admin_queue->q_depth); + netdev_err(admin_queue->ena_dev->net_device, + "Command id is larger than the queue size. cmd_id: %u queue size %d\n", + command_id, admin_queue->q_depth); return NULL; } if (unlikely(!admin_queue->comp_ctx)) { - pr_err("Completion context is NULL\n"); + netdev_err(admin_queue->ena_dev->net_device, + "Completion context is NULL\n"); return NULL; } if (unlikely(admin_queue->comp_ctx[command_id].occupied && capture)) { - pr_err("Completion context is occupied\n"); + netdev_err(admin_queue->ena_dev->net_device, + "Completion context is occupied\n"); return NULL; } @@ -217,7 +224,8 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu /* In case of queue FULL */ cnt = (u16)atomic_read(&admin_queue->outstanding_cmds); if (cnt >= admin_queue->q_depth) { - pr_debug("Admin queue is full.\n"); + netdev_dbg(admin_queue->ena_dev->net_device, + "Admin queue is full.\n"); admin_queue->stats.out_of_space++; return ERR_PTR(-ENOSPC); } @@ -259,6 +267,7 @@ static struct ena_comp_ctx *__ena_com_submit_admin_cmd(struct ena_com_admin_queu static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue) { + struct ena_com_dev *ena_dev = admin_queue->ena_dev; size_t size = admin_queue->q_depth * sizeof(struct ena_comp_ctx); struct ena_comp_ctx *comp_ctx; u16 i; @@ -266,7 +275,7 @@ static int ena_com_init_comp_ctxt(struct ena_com_admin_queue *admin_queue) admin_queue->comp_ctx = devm_kzalloc(admin_queue->q_dmadev, size, GFP_KERNEL); if (unlikely(!admin_queue->comp_ctx)) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, "Memory allocation failed\n"); return -ENOMEM; } @@ -337,7 +346,8 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev, } if (!io_sq->desc_addr.virt_addr) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, + "Memory allocation failed\n"); return -ENOMEM; } } @@ -363,7 +373,8 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev, devm_kzalloc(ena_dev->dmadev, size, GFP_KERNEL); if (!io_sq->bounce_buf_ctrl.base_buffer) { - pr_err("Bounce buffer memory allocation failed\n"); + netdev_err(ena_dev->net_device, + "Bounce buffer memory allocation failed\n"); return -ENOMEM; } @@ -423,7 +434,7 @@ static int ena_com_init_io_cq(struct ena_com_dev *ena_dev, } if (!io_cq->cdesc_addr.virt_addr) { - pr_err("Memory allocation failed\n"); + netdev_err(ena_dev->net_device, "Memory allocation failed\n"); return -ENOMEM; } @@ -444,7 +455,8 @@ static void ena_com_handle_single_admin_completion(struct ena_com_admin_queue *a comp_ctx = get_comp_ctxt(admin_queue, cmd_id, false); if (unlikely(!comp_ctx)) { - pr_err("comp_ctx is NULL. Changing the admin queue running state\n"); + netdev_err(admin_queue->ena_dev->net_device, + "comp_ctx is NULL. 
Changing the admin queue running state\n"); admin_queue->running_state = false; return; } @@ -496,10 +508,12 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu admin_queue->stats.completed_cmd += comp_num; } -static int ena_com_comp_status_to_errno(u8 comp_status) +static int ena_com_comp_status_to_errno(struct ena_com_admin_queue *admin_queue, + u8 comp_status) { if (unlikely(comp_status != 0)) - pr_err("Admin command failed[%u]\n", comp_status); + netdev_err(admin_queue->ena_dev->net_device, + "Admin command failed[%u]\n", comp_status); switch (comp_status) { case ENA_ADMIN_SUCCESS: @@ -546,7 +560,8 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c break; if (time_is_before_jiffies(timeout)) { - pr_err("Wait for completion (polling) timeout\n"); + netdev_err(admin_queue->ena_dev->net_device, + "Wait for completion (polling) timeout\n"); /* ENA didn't have any completion */ spin_lock_irqsave(&admin_queue->q_lock, flags); admin_queue->stats.no_completion++; @@ -562,7 +577,8 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c } if (unlikely(comp_ctx->status == ENA_CMD_ABORTED)) { - pr_err("Command was aborted\n"); + netdev_err(admin_queue->ena_dev->net_device, + "Command was aborted\n"); spin_lock_irqsave(&admin_queue->q_lock, flags); admin_queue->stats.aborted_cmd++; spin_unlock_irqrestore(&admin_queue->q_lock, flags); @@ -573,7 +589,7 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c WARN(comp_ctx->status != ENA_CMD_COMPLETED, "Invalid comp status %d\n", comp_ctx->status); - ret = ena_com_comp_status_to_errno(comp_ctx->comp_status); + ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status); err: comp_ctxt_release(admin_queue, comp_ctx); return ret; @@ -615,7 +631,8 @@ static int ena_com_set_llq(struct ena_com_dev *ena_dev) sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to set LLQ configurations: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to set LLQ configurations: %d\n", ret); return ret; } @@ -637,8 +654,9 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, llq_info->header_location_ctrl = llq_default_cfg->llq_header_location; } else { - pr_err("Invalid header location control, supported: 0x%x\n", - supported_feat); + netdev_err(ena_dev->net_device, + "Invalid header location control, supported: 0x%x\n", + supported_feat); return -EINVAL; } @@ -652,14 +670,16 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, } else if (supported_feat & ENA_ADMIN_SINGLE_DESC_PER_ENTRY) { llq_info->desc_stride_ctrl = ENA_ADMIN_SINGLE_DESC_PER_ENTRY; } else { - pr_err("Invalid desc_stride_ctrl, supported: 0x%x\n", - supported_feat); + netdev_err(ena_dev->net_device, + "Invalid desc_stride_ctrl, supported: 0x%x\n", + supported_feat); return -EINVAL; } - pr_err("Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", - llq_default_cfg->llq_stride_ctrl, supported_feat, - llq_info->desc_stride_ctrl); + netdev_err(ena_dev->net_device, + "Default llq stride ctrl is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + llq_default_cfg->llq_stride_ctrl, + supported_feat, llq_info->desc_stride_ctrl); } } else { llq_info->desc_stride_ctrl = 0; @@ -680,20 +700,23 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, llq_info->desc_list_entry_size_ctrl = ENA_ADMIN_LIST_ENTRY_SIZE_256B; llq_info->desc_list_entry_size = 256; } 
else { - pr_err("Invalid entry_size_ctrl, supported: 0x%x\n", - supported_feat); + netdev_err(ena_dev->net_device, + "Invalid entry_size_ctrl, supported: 0x%x\n", + supported_feat); return -EINVAL; } - pr_err("Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", - llq_default_cfg->llq_ring_entry_size, supported_feat, - llq_info->desc_list_entry_size); + netdev_err(ena_dev->net_device, + "Default llq ring entry size is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + llq_default_cfg->llq_ring_entry_size, supported_feat, + llq_info->desc_list_entry_size); } if (unlikely(llq_info->desc_list_entry_size & 0x7)) { /* The desc list entry size should be whole multiply of 8 * This requirement comes from __iowrite64_copy() */ - pr_err("Illegal entry size %d\n", llq_info->desc_list_entry_size); + netdev_err(ena_dev->net_device, "Illegal entry size %d\n", + llq_info->desc_list_entry_size); return -EINVAL; } @@ -716,14 +739,16 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, } else if (supported_feat & ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8) { llq_info->descs_num_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_8; } else { - pr_err("Invalid descs_num_before_header, supported: 0x%x\n", - supported_feat); + netdev_err(ena_dev->net_device, + "Invalid descs_num_before_header, supported: 0x%x\n", + supported_feat); return -EINVAL; } - pr_err("Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", - llq_default_cfg->llq_num_decs_before_header, - supported_feat, llq_info->descs_num_before_header); + netdev_err(ena_dev->net_device, + "Default llq num descs before header is not supported, performing fallback, default: 0x%x, supported: 0x%x, used: 0x%x\n", + llq_default_cfg->llq_num_decs_before_header, + supported_feat, llq_info->descs_num_before_header); } /* Check for accelerated queue supported */ llq_accel_mode_get = llq_features->accel_mode.u.get; @@ -739,7 +764,8 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, rc = ena_com_set_llq(ena_dev); if (rc) - pr_err("Cannot set LLQ configuration: %d\n", rc); + netdev_err(ena_dev->net_device, + "Cannot set LLQ configuration: %d\n", rc); return rc; } @@ -766,15 +792,17 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com spin_unlock_irqrestore(&admin_queue->q_lock, flags); if (comp_ctx->status == ENA_CMD_COMPLETED) { - pr_err("The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n", - comp_ctx->cmd_opcode, - admin_queue->auto_polling ? "ON" : "OFF"); + netdev_err(admin_queue->ena_dev->net_device, + "The ena device sent a completion but the driver didn't receive a MSI-X interrupt (cmd %d), autopolling mode is %s\n", + comp_ctx->cmd_opcode, + admin_queue->auto_polling ? "ON" : "OFF"); /* Check if fallback to polling is enabled */ if (admin_queue->auto_polling) admin_queue->polling = true; } else { - pr_err("The ena device didn't send a completion for the admin cmd %d status %d\n", - comp_ctx->cmd_opcode, comp_ctx->status); + netdev_err(admin_queue->ena_dev->net_device, + "The ena device didn't send a completion for the admin cmd %d status %d\n", + comp_ctx->cmd_opcode, comp_ctx->status); } /* Check if shifted to polling mode. 
* This will happen if there is a completion without an interrupt @@ -787,7 +815,7 @@ static int ena_com_wait_and_process_admin_cq_interrupts(struct ena_comp_ctx *com } } - ret = ena_com_comp_status_to_errno(comp_ctx->comp_status); + ret = ena_com_comp_status_to_errno(admin_queue, comp_ctx->comp_status); err: comp_ctxt_release(admin_queue, comp_ctx); return ret; @@ -834,15 +862,17 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset) } if (unlikely(i == timeout)) { - pr_err("Reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n", - mmio_read->seq_num, offset, read_resp->req_id, - read_resp->reg_off); + netdev_err(ena_dev->net_device, + "Reading reg failed for timeout. expected: req id[%hu] offset[%hu] actual: req id[%hu] offset[%hu]\n", + mmio_read->seq_num, offset, read_resp->req_id, + read_resp->reg_off); ret = ENA_MMIO_READ_TIMEOUT; goto err; } if (read_resp->reg_off != offset) { - pr_err("Read failure: wrong offset provided\n"); + netdev_err(ena_dev->net_device, + "Read failure: wrong offset provided\n"); ret = ENA_MMIO_READ_TIMEOUT; } else { ret = read_resp->reg_val; @@ -901,7 +931,8 @@ static int ena_com_destroy_io_sq(struct ena_com_dev *ena_dev, sizeof(destroy_resp)); if (unlikely(ret && (ret != -ENODEV))) - pr_err("Failed to destroy io sq error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to destroy io sq error: %d\n", ret); return ret; } @@ -951,7 +982,8 @@ static int wait_for_reset_state(struct ena_com_dev *ena_dev, u32 timeout, val = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF); if (unlikely(val == ENA_MMIO_READ_TIMEOUT)) { - pr_err("Reg read timeout occurred\n"); + netdev_err(ena_dev->net_device, + "Reg read timeout occurred\n"); return -ETIME; } @@ -991,7 +1023,8 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, int ret; if (!ena_com_check_supported_feature_id(ena_dev, feature_id)) { - pr_debug("Feature %d isn't supported\n", feature_id); + netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", + feature_id); return -EOPNOTSUPP; } @@ -1010,7 +1043,7 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, &get_cmd.control_buffer.address, control_buf_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -1027,8 +1060,9 @@ static int ena_com_get_feature_ex(struct ena_com_dev *ena_dev, sizeof(*get_resp)); if (unlikely(ret)) - pr_err("Failed to submit get_feature command %d error: %d\n", - feature_id, ret); + netdev_err(ena_dev->net_device, + "Failed to submit get_feature command %d error: %d\n", + feature_id, ret); return ret; } @@ -1130,9 +1164,10 @@ static int ena_com_indirect_table_allocate(struct ena_com_dev *ena_dev, if ((get_resp.u.ind_table.min_size > log_size) || (get_resp.u.ind_table.max_size < log_size)) { - pr_err("Indirect table size doesn't fit. requested size: %d while min is:%d and max %d\n", - 1 << log_size, 1 << get_resp.u.ind_table.min_size, - 1 << get_resp.u.ind_table.max_size); + netdev_err(ena_dev->net_device, + "Indirect table size doesn't fit. 
requested size: %d while min is:%d and max %d\n", + 1 << log_size, 1 << get_resp.u.ind_table.min_size, + 1 << get_resp.u.ind_table.max_size); return -EINVAL; } @@ -1223,7 +1258,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev, &create_cmd.sq_ba, io_sq->desc_addr.phys_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, + "Memory address set failed\n"); return ret; } } @@ -1234,7 +1270,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev, (struct ena_admin_acq_entry *)&cmd_completion, sizeof(cmd_completion)); if (unlikely(ret)) { - pr_err("Failed to create IO SQ. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to create IO SQ. error: %d\n", ret); return ret; } @@ -1252,7 +1289,8 @@ static int ena_com_create_io_sq(struct ena_com_dev *ena_dev, cmd_completion.llq_descriptors_offset); } - pr_debug("Created sq[%u], depth[%u]\n", io_sq->idx, io_sq->q_depth); + netdev_dbg(ena_dev->net_device, "Created sq[%u], depth[%u]\n", + io_sq->idx, io_sq->q_depth); return ret; } @@ -1286,7 +1324,8 @@ static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev, u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution; if (unlikely(!intr_delay_resolution)) { - pr_err("Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n"); + netdev_err(ena_dev->net_device, + "Illegal intr_delay_resolution provided. Going to use default 1 usec resolution\n"); intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION; } @@ -1322,11 +1361,13 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue, comp, comp_size); if (IS_ERR(comp_ctx)) { if (comp_ctx == ERR_PTR(-ENODEV)) - pr_debug("Failed to submit command [%ld]\n", - PTR_ERR(comp_ctx)); + netdev_dbg(admin_queue->ena_dev->net_device, + "Failed to submit command [%ld]\n", + PTR_ERR(comp_ctx)); else - pr_err("Failed to submit command [%ld]\n", - PTR_ERR(comp_ctx)); + netdev_err(admin_queue->ena_dev->net_device, + "Failed to submit command [%ld]\n", + PTR_ERR(comp_ctx)); return PTR_ERR(comp_ctx); } @@ -1334,9 +1375,11 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue, ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue); if (unlikely(ret)) { if (admin_queue->running_state) - pr_err("Failed to process command. ret = %d\n", ret); + netdev_err(admin_queue->ena_dev->net_device, + "Failed to process command. ret = %d\n", ret); else - pr_debug("Failed to process command. ret = %d\n", ret); + netdev_dbg(admin_queue->ena_dev->net_device, + "Failed to process command. ret = %d\n", ret); } return ret; } @@ -1365,7 +1408,7 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev, &create_cmd.cq_ba, io_cq->cdesc_addr.phys_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -1375,7 +1418,8 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev, (struct ena_admin_acq_entry *)&cmd_completion, sizeof(cmd_completion)); if (unlikely(ret)) { - pr_err("Failed to create IO CQ. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to create IO CQ. 
error: %d\n", ret); return ret; } @@ -1394,7 +1438,8 @@ int ena_com_create_io_cq(struct ena_com_dev *ena_dev, (u32 __iomem *)((uintptr_t)ena_dev->reg_bar + cmd_completion.numa_node_register_offset); - pr_debug("Created cq[%u], depth[%u]\n", io_cq->idx, io_cq->q_depth); + netdev_dbg(ena_dev->net_device, "Created cq[%u], depth[%u]\n", + io_cq->idx, io_cq->q_depth); return ret; } @@ -1404,8 +1449,9 @@ int ena_com_get_io_handlers(struct ena_com_dev *ena_dev, u16 qid, struct ena_com_io_cq **io_cq) { if (qid >= ENA_TOTAL_NUM_QUEUES) { - pr_err("Invalid queue number %d but the max is %d\n", qid, - ENA_TOTAL_NUM_QUEUES); + netdev_err(ena_dev->net_device, + "Invalid queue number %d but the max is %d\n", qid, + ENA_TOTAL_NUM_QUEUES); return -EINVAL; } @@ -1471,7 +1517,8 @@ int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev, sizeof(destroy_resp)); if (unlikely(ret && (ret != -ENODEV))) - pr_err("Failed to destroy IO CQ. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to destroy IO CQ. error: %d\n", ret); return ret; } @@ -1513,13 +1560,14 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag) ret = ena_com_get_feature(ena_dev, &get_resp, ENA_ADMIN_AENQ_CONFIG, 0); if (ret) { - pr_info("Can't get aenq configuration\n"); + dev_info(ena_dev->dmadev, "Can't get aenq configuration\n"); return ret; } if ((get_resp.u.aenq.supported_groups & groups_flag) != groups_flag) { - pr_warn("Trying to set unsupported aenq events. supported flag: 0x%x asked flag: 0x%x\n", - get_resp.u.aenq.supported_groups, groups_flag); + netdev_warn(ena_dev->net_device, + "Trying to set unsupported aenq events. supported flag: 0x%x asked flag: 0x%x\n", + get_resp.u.aenq.supported_groups, groups_flag); return -EOPNOTSUPP; } @@ -1538,7 +1586,8 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag) sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to config AENQ ret: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to config AENQ ret: %d\n", ret); return ret; } @@ -1549,17 +1598,18 @@ int ena_com_get_dma_width(struct ena_com_dev *ena_dev) int width; if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) { - pr_err("Reg read timeout occurred\n"); + netdev_err(ena_dev->net_device, "Reg read timeout occurred\n"); return -ETIME; } width = (caps & ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK) >> ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT; - pr_debug("ENA dma width: %d\n", width); + netdev_dbg(ena_dev->net_device, "ENA dma width: %d\n", width); if ((width < 32) || width > ENA_MAX_PHYS_ADDR_SIZE_BITS) { - pr_err("DMA width illegal value: %d\n", width); + netdev_err(ena_dev->net_device, "DMA width illegal value: %d\n", + width); return -EINVAL; } @@ -1583,23 +1633,24 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev) if (unlikely((ver == ENA_MMIO_READ_TIMEOUT) || (ctrl_ver == ENA_MMIO_READ_TIMEOUT))) { - pr_err("Reg read timeout occurred\n"); + netdev_err(ena_dev->net_device, "Reg read timeout occurred\n"); return -ETIME; } - pr_info("ENA device version: %d.%d\n", - (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >> - ENA_REGS_VERSION_MAJOR_VERSION_SHIFT, - ver & ENA_REGS_VERSION_MINOR_VERSION_MASK); + dev_info(ena_dev->dmadev, "ENA device version: %d.%d\n", + (ver & ENA_REGS_VERSION_MAJOR_VERSION_MASK) >> + ENA_REGS_VERSION_MAJOR_VERSION_SHIFT, + ver & ENA_REGS_VERSION_MINOR_VERSION_MASK); - pr_info("ENA controller version: %d.%d.%d implementation version %d\n", - (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >> - ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT, - (ctrl_ver & 
ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >> - ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT, - (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK), - (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK) >> - ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT); + dev_info(ena_dev->dmadev, + "ENA controller version: %d.%d.%d implementation version %d\n", + (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) >> + ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT, + (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK) >> + ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT, + (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK), + (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK) >> + ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT); ctrl_ver_masked = (ctrl_ver & ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK) | @@ -1608,7 +1659,8 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev) /* Validate the ctrl version without the implementation ID */ if (ctrl_ver_masked < MIN_ENA_CTRL_VER) { - pr_err("ENA ctrl version is lower than the minimal ctrl version the driver supports\n"); + netdev_err(ena_dev->net_device, + "ENA ctrl version is lower than the minimal ctrl version the driver supports\n"); return -1; } @@ -1741,12 +1793,13 @@ int ena_com_admin_init(struct ena_com_dev *ena_dev, dev_sts = ena_com_reg_bar_read32(ena_dev, ENA_REGS_DEV_STS_OFF); if (unlikely(dev_sts == ENA_MMIO_READ_TIMEOUT)) { - pr_err("Reg read timeout occurred\n"); + netdev_err(ena_dev->net_device, "Reg read timeout occurred\n"); return -ETIME; } if (!(dev_sts & ENA_REGS_DEV_STS_READY_MASK)) { - pr_err("Device isn't ready, abort com init\n"); + netdev_err(ena_dev->net_device, + "Device isn't ready, abort com init\n"); return -ENODEV; } @@ -1823,8 +1876,9 @@ int ena_com_create_io_queue(struct ena_com_dev *ena_dev, int ret; if (ctx->qid >= ENA_TOTAL_NUM_QUEUES) { - pr_err("Qid (%d) is bigger than max num of queues (%d)\n", - ctx->qid, ENA_TOTAL_NUM_QUEUES); + netdev_err(ena_dev->net_device, + "Qid (%d) is bigger than max num of queues (%d)\n", + ctx->qid, ENA_TOTAL_NUM_QUEUES); return -EINVAL; } @@ -1882,8 +1936,9 @@ void ena_com_destroy_io_queue(struct ena_com_dev *ena_dev, u16 qid) struct ena_com_io_cq *io_cq; if (qid >= ENA_TOTAL_NUM_QUEUES) { - pr_err("Qid (%d) is bigger than max num of queues (%d)\n", qid, - ENA_TOTAL_NUM_QUEUES); + netdev_err(ena_dev->net_device, + "Qid (%d) is bigger than max num of queues (%d)\n", + qid, ENA_TOTAL_NUM_QUEUES); return; } @@ -2035,8 +2090,9 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *ena_dev, void *data) timestamp = (u64)aenq_common->timestamp_low | ((u64)aenq_common->timestamp_high << 32); - pr_debug("AENQ! Group[%x] Syndrome[%x] timestamp: [%llus]\n", - aenq_common->group, aenq_common->syndrome, timestamp); + netdev_dbg(ena_dev->net_device, + "AENQ! 
Group[%x] Syndrome[%x] timestamp: [%llus]\n", + aenq_common->group, aenq_common->syndrome, timestamp); /* Handle specific event*/ handler_cb = ena_com_get_specific_aenq_cb(ena_dev, @@ -2079,19 +2135,20 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev, if (unlikely((stat == ENA_MMIO_READ_TIMEOUT) || (cap == ENA_MMIO_READ_TIMEOUT))) { - pr_err("Reg read32 timeout occurred\n"); + netdev_err(ena_dev->net_device, "Reg read32 timeout occurred\n"); return -ETIME; } if ((stat & ENA_REGS_DEV_STS_READY_MASK) == 0) { - pr_err("Device isn't ready, can't reset device\n"); + netdev_err(ena_dev->net_device, + "Device isn't ready, can't reset device\n"); return -EINVAL; } timeout = (cap & ENA_REGS_CAPS_RESET_TIMEOUT_MASK) >> ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT; if (timeout == 0) { - pr_err("Invalid timeout value\n"); + netdev_err(ena_dev->net_device, "Invalid timeout value\n"); return -EINVAL; } @@ -2107,7 +2164,8 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev, rc = wait_for_reset_state(ena_dev, timeout, ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK); if (rc != 0) { - pr_err("Reset indication didn't turn on\n"); + netdev_err(ena_dev->net_device, + "Reset indication didn't turn on\n"); return rc; } @@ -2115,7 +2173,8 @@ int ena_com_dev_reset(struct ena_com_dev *ena_dev, writel(0, ena_dev->reg_bar + ENA_REGS_DEV_CTL_OFF); rc = wait_for_reset_state(ena_dev, timeout, 0); if (rc != 0) { - pr_err("Reset indication didn't turn off\n"); + netdev_err(ena_dev->net_device, + "Reset indication didn't turn off\n"); return rc; } @@ -2152,7 +2211,8 @@ static int ena_get_dev_stats(struct ena_com_dev *ena_dev, sizeof(*get_resp)); if (unlikely(ret)) - pr_err("Failed to get stats. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to get stats. error: %d\n", ret); return ret; } @@ -2195,7 +2255,8 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) int ret; if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_MTU)) { - pr_debug("Feature %d isn't supported\n", ENA_ADMIN_MTU); + netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", + ENA_ADMIN_MTU); return -EOPNOTSUPP; } @@ -2214,7 +2275,8 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to set mtu %d. error: %d\n", mtu, ret); + netdev_err(ena_dev->net_device, + "Failed to set mtu %d. 
error: %d\n", mtu, ret); return ret; } @@ -2228,7 +2290,8 @@ int ena_com_get_offload_settings(struct ena_com_dev *ena_dev, ret = ena_com_get_feature(ena_dev, &resp, ENA_ADMIN_STATELESS_OFFLOAD_CONFIG, 0); if (unlikely(ret)) { - pr_err("Failed to get offload capabilities %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to get offload capabilities %d\n", ret); return ret; } @@ -2248,8 +2311,8 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_FUNCTION)) { - pr_debug("Feature %d isn't supported\n", - ENA_ADMIN_RSS_HASH_FUNCTION); + netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", + ENA_ADMIN_RSS_HASH_FUNCTION); return -EOPNOTSUPP; } @@ -2260,8 +2323,9 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) return ret; if (!(get_resp.u.flow_hash_func.supported_func & BIT(rss->hash_func))) { - pr_err("Func hash %d isn't supported by device, abort\n", - rss->hash_func); + netdev_err(ena_dev->net_device, + "Func hash %d isn't supported by device, abort\n", + rss->hash_func); return -EOPNOTSUPP; } @@ -2278,7 +2342,7 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) &cmd.control_buffer.address, rss->hash_key_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -2290,8 +2354,9 @@ int ena_com_set_hash_function(struct ena_com_dev *ena_dev) (struct ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) { - pr_err("Failed to set hash function %d. error: %d\n", - rss->hash_func, ret); + netdev_err(ena_dev->net_device, + "Failed to set hash function %d. error: %d\n", + rss->hash_func, ret); return -EINVAL; } @@ -2322,7 +2387,8 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, return rc; if (!(BIT(func) & get_resp.u.flow_hash_func.supported_func)) { - pr_err("Flow hash function %d isn't supported\n", func); + netdev_err(ena_dev->net_device, + "Flow hash function %d isn't supported\n", func); return -EOPNOTSUPP; } @@ -2330,8 +2396,9 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, case ENA_ADMIN_TOEPLITZ: if (key) { if (key_len != sizeof(hash_key->key)) { - pr_err("key len (%hu) doesn't equal the supported size (%zu)\n", - key_len, sizeof(hash_key->key)); + netdev_err(ena_dev->net_device, + "key len (%hu) doesn't equal the supported size (%zu)\n", + key_len, sizeof(hash_key->key)); return -EINVAL; } memcpy(hash_key->key, key, key_len); @@ -2343,7 +2410,8 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev, rss->hash_init_val = init_val; break; default: - pr_err("Invalid hash function (%d)\n", func); + netdev_err(ena_dev->net_device, "Invalid hash function (%d)\n", + func); return -EINVAL; } @@ -2429,8 +2497,8 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) if (!ena_com_check_supported_feature_id(ena_dev, ENA_ADMIN_RSS_HASH_INPUT)) { - pr_debug("Feature %d isn't supported\n", - ENA_ADMIN_RSS_HASH_INPUT); + netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", + ENA_ADMIN_RSS_HASH_INPUT); return -EOPNOTSUPP; } @@ -2448,7 +2516,7 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) &cmd.control_buffer.address, rss->hash_ctrl_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } cmd.control_buffer.length = sizeof(*hash_ctrl); @@ -2459,7 +2527,8 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev) (struct 
ena_admin_acq_entry *)&resp, sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to set hash input. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to set hash input. error: %d\n", ret); return ret; } @@ -2509,9 +2578,10 @@ int ena_com_set_default_hash_ctrl(struct ena_com_dev *ena_dev) available_fields = hash_ctrl->selected_fields[i].fields & hash_ctrl->supported_fields[i].fields; if (available_fields != hash_ctrl->selected_fields[i].fields) { - pr_err("Hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n", - i, hash_ctrl->supported_fields[i].fields, - hash_ctrl->selected_fields[i].fields); + netdev_err(ena_dev->net_device, + "Hash control doesn't support all the desire configuration. proto %x supported %x selected %x\n", + i, hash_ctrl->supported_fields[i].fields, + hash_ctrl->selected_fields[i].fields); return -EOPNOTSUPP; } } @@ -2535,7 +2605,8 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev, int rc; if (proto >= ENA_ADMIN_RSS_PROTO_NUM) { - pr_err("Invalid proto num (%u)\n", proto); + netdev_err(ena_dev->net_device, "Invalid proto num (%u)\n", + proto); return -EINVAL; } @@ -2547,8 +2618,9 @@ int ena_com_fill_hash_ctrl(struct ena_com_dev *ena_dev, /* Make sure all the fields are supported */ supported_fields = hash_ctrl->supported_fields[proto].fields; if ((hash_fields & supported_fields) != hash_fields) { - pr_err("Proto %d doesn't support the required fields %x. supports only: %x\n", - proto, hash_fields, supported_fields); + netdev_err(ena_dev->net_device, + "Proto %d doesn't support the required fields %x. supports only: %x\n", + proto, hash_fields, supported_fields); } hash_ctrl->selected_fields[proto].fields = hash_fields; @@ -2588,14 +2660,15 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) if (!ena_com_check_supported_feature_id( ena_dev, ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG)) { - pr_debug("Feature %d isn't supported\n", - ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG); + netdev_dbg(ena_dev->net_device, "Feature %d isn't supported\n", + ENA_ADMIN_RSS_INDIRECTION_TABLE_CONFIG); return -EOPNOTSUPP; } ret = ena_com_ind_tbl_convert_to_device(ena_dev); if (ret) { - pr_err("Failed to convert host indirection table to device table\n"); + netdev_err(ena_dev->net_device, + "Failed to convert host indirection table to device table\n"); return ret; } @@ -2612,7 +2685,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) &cmd.control_buffer.address, rss->rss_ind_tbl_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -2626,7 +2699,8 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to set indirect table. error: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to set indirect table. 
error: %d\n", ret); return ret; } @@ -2782,7 +2856,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev) &cmd.u.host_attr.debug_ba, host_attr->debug_area_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -2790,7 +2864,7 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev) &cmd.u.host_attr.os_info_ba, host_attr->host_info_dma_addr); if (unlikely(ret)) { - pr_err("Memory address set failed\n"); + netdev_err(ena_dev->net_device, "Memory address set failed\n"); return ret; } @@ -2803,7 +2877,8 @@ int ena_com_set_host_attributes(struct ena_com_dev *ena_dev) sizeof(resp)); if (unlikely(ret)) - pr_err("Failed to set host attributes: %d\n", ret); + netdev_err(ena_dev->net_device, + "Failed to set host attributes: %d\n", ret); return ret; } @@ -2815,12 +2890,14 @@ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev) ENA_ADMIN_INTERRUPT_MODERATION); } -static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs, +static int ena_com_update_nonadaptive_moderation_interval(struct ena_com_dev *ena_dev, + u32 coalesce_usecs, u32 intr_delay_resolution, u32 *intr_moder_interval) { if (!intr_delay_resolution) { - pr_err("Illegal interrupt delay granularity value\n"); + netdev_err(ena_dev->net_device, + "Illegal interrupt delay granularity value\n"); return -EFAULT; } @@ -2832,7 +2909,8 @@ static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs, int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, u32 tx_coalesce_usecs) { - return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs, + return ena_com_update_nonadaptive_moderation_interval(ena_dev, + tx_coalesce_usecs, ena_dev->intr_delay_resolution, &ena_dev->intr_moder_tx_interval); } @@ -2840,7 +2918,8 @@ int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_de int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, u32 rx_coalesce_usecs) { - return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs, + return ena_com_update_nonadaptive_moderation_interval(ena_dev, + rx_coalesce_usecs, ena_dev->intr_delay_resolution, &ena_dev->intr_moder_rx_interval); } @@ -2856,12 +2935,14 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) if (rc) { if (rc == -EOPNOTSUPP) { - pr_debug("Feature %d isn't supported\n", - ENA_ADMIN_INTERRUPT_MODERATION); + netdev_dbg(ena_dev->net_device, + "Feature %d isn't supported\n", + ENA_ADMIN_INTERRUPT_MODERATION); rc = 0; } else { - pr_err("Failed to get interrupt moderation admin cmd. rc: %d\n", - rc); + netdev_err(ena_dev->net_device, + "Failed to get interrupt moderation admin cmd. 
rc: %d\n", + rc); } /* no moderation supported, disable adaptive support */ @@ -2909,7 +2990,8 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev, (llq_info->descs_num_before_header * sizeof(struct ena_eth_io_tx_desc)); if (unlikely(ena_dev->tx_max_header_size == 0)) { - pr_err("The size of the LLQ entry is smaller than needed\n"); + netdev_err(ena_dev->net_device, + "The size of the LLQ entry is smaller than needed\n"); return -EINVAL; } diff --git a/drivers/net/ethernet/amazon/ena/ena_com.h b/drivers/net/ethernet/amazon/ena/ena_com.h index 55097750d062..b0f76fb3b1d7 100644 --- a/drivers/net/ethernet/amazon/ena/ena_com.h +++ b/drivers/net/ethernet/amazon/ena/ena_com.h @@ -303,6 +303,7 @@ struct ena_com_dev { u8 __iomem *reg_bar; void __iomem *mem_bar; void *dmadev; + struct net_device *net_device; enum ena_admin_placement_policy_type tx_mem_queue_type; u32 tx_max_header_size; @@ -931,6 +932,26 @@ int ena_com_config_dev_mode(struct ena_com_dev *ena_dev, struct ena_admin_feature_llq_desc *llq_features, struct ena_llq_configurations *llq_default_config); +/* ena_com_io_sq_to_ena_dev - Extract ena_com_dev using contained field io_sq. + * @io_sq: IO submit queue struct + * + * @return - ena_com_dev struct extracted from io_sq + */ +static inline struct ena_com_dev *ena_com_io_sq_to_ena_dev(struct ena_com_io_sq *io_sq) +{ + return container_of(io_sq, struct ena_com_dev, io_sq_queues[io_sq->qid]); +} + +/* ena_com_io_cq_to_ena_dev - Extract ena_com_dev using contained field io_cq. + * @io_sq: IO submit queue struct + * + * @return - ena_com_dev struct extracted from io_sq + */ +static inline struct ena_com_dev *ena_com_io_cq_to_ena_dev(struct ena_com_io_cq *io_cq) +{ + return container_of(io_cq, struct ena_com_dev, io_cq_queues[io_cq->qid]); +} + static inline bool ena_com_get_adaptive_moderation_enabled(struct ena_com_dev *ena_dev) { return ena_dev->adaptive_coalescing; diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c index 032ab9f20438..85daeac219ec 100644 --- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c +++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c @@ -58,13 +58,15 @@ static int ena_com_write_bounce_buffer_to_dev(struct ena_com_io_sq *io_sq, if (is_llq_max_tx_burst_exists(io_sq)) { if (unlikely(!io_sq->entries_in_tx_burst_left)) { - pr_err("Error: trying to send more packets than tx burst allows\n"); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Error: trying to send more packets than tx burst allows\n"); return -ENOSPC; } io_sq->entries_in_tx_burst_left--; - pr_debug("Decreasing entries_in_tx_burst_left of queue %d to %d\n", - io_sq->qid, io_sq->entries_in_tx_burst_left); + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Decreasing entries_in_tx_burst_left of queue %d to %d\n", + io_sq->qid, io_sq->entries_in_tx_burst_left); } /* Make sure everything was written into the bounce buffer before @@ -102,12 +104,14 @@ static int ena_com_write_header_to_bounce(struct ena_com_io_sq *io_sq, if (unlikely((header_offset + header_len) > llq_info->desc_list_entry_size)) { - pr_err("Trying to write header larger than llq entry can accommodate\n"); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Trying to write header larger than llq entry can accommodate\n"); return -EFAULT; } if (unlikely(!bounce_buffer)) { - pr_err("Bounce buffer is NULL\n"); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Bounce buffer is NULL\n"); return -EFAULT; } @@ -125,7 +129,8 @@ static void 
*get_sq_desc_llq(struct ena_com_io_sq *io_sq) bounce_buffer = pkt_ctrl->curr_bounce_buf; if (unlikely(!bounce_buffer)) { - pr_err("Bounce buffer is NULL\n"); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Bounce buffer is NULL\n"); return NULL; } @@ -250,8 +255,9 @@ static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq, io_cq->cur_rx_pkt_cdesc_count = 0; io_cq->cur_rx_pkt_cdesc_start_idx = head_masked; - pr_debug("ENA q_id: %d packets were completed. first desc idx %u descs# %d\n", - io_cq->qid, *first_cdesc_idx, count); + netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "ENA q_id: %d packets were completed. first desc idx %u descs# %d\n", + io_cq->qid, *first_cdesc_idx, count); } else { io_cq->cur_rx_pkt_cdesc_count += count; count = 0; @@ -335,7 +341,8 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, return 0; } -static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx, +static void ena_com_rx_set_flags(struct ena_com_io_cq *io_cq, + struct ena_com_rx_ctx *ena_rx_ctx, struct ena_eth_io_rx_cdesc_base *cdesc) { ena_rx_ctx->l3_proto = cdesc->status & @@ -357,10 +364,11 @@ static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx, (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >> ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT; - pr_debug("l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash %d frag %d cdesc_status %x\n", - ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto, - ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err, - ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status); + netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "l3_proto %d l4_proto %d l3_csum_err %d l4_csum_err %d hash %d frag %d cdesc_status %x\n", + ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto, + ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err, + ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status); } /*****************************************************************************/ @@ -385,13 +393,15 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, /* num_bufs +1 for potential meta desc */ if (unlikely(!ena_com_sq_have_enough_space(io_sq, num_bufs + 1))) { - pr_debug("Not enough space in the tx queue\n"); + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Not enough space in the tx queue\n"); return -ENOMEM; } if (unlikely(header_len > io_sq->tx_max_header_size)) { - pr_err("Header size is too large %d max header: %d\n", - header_len, io_sq->tx_max_header_size); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Header size is too large %d max header: %d\n", + header_len, io_sq->tx_max_header_size); return -EINVAL; } @@ -405,7 +415,8 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, rc = ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx, &have_meta); if (unlikely(rc)) { - pr_err("Failed to create and store tx meta desc\n"); + netdev_err(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Failed to create and store tx meta desc\n"); return rc; } @@ -529,12 +540,14 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq, return 0; } - pr_debug("Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid, - nb_hw_desc); + netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "Fetch rx packet: queue %d completed desc: %d\n", io_cq->qid, + nb_hw_desc); if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) { - pr_err("Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc, - ena_rx_ctx->max_bufs); + netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc, + 
ena_rx_ctx->max_bufs); return -ENOSPC; } @@ -557,11 +570,12 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq, /* Update SQ head ptr */ io_sq->next_to_comp += nb_hw_desc; - pr_debug("[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid, - io_sq->next_to_comp); + netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "[%s][QID#%d] Updating SQ head to: %d\n", __func__, + io_sq->qid, io_sq->next_to_comp); /* Get rx flags from the last pkt */ - ena_com_rx_set_flags(ena_rx_ctx, cdesc); + ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc); ena_rx_ctx->descs = nb_hw_desc; return 0; @@ -593,6 +607,10 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq, desc->req_id = req_id; + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "[%s] Adding single RX desc, Queue: %u, req_id: %u\n", + __func__, io_sq->qid, req_id); + desc->buff_addr_lo = (u32)ena_buf->paddr; desc->buff_addr_hi = ((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32); diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h index 2c16c218818a..689313ee25a8 100644 --- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h +++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h @@ -140,8 +140,9 @@ static inline bool ena_com_is_doorbell_needed(struct ena_com_io_sq *io_sq, llq_info->descs_per_entry); } - pr_debug("Queue: %d num_descs: %d num_entries_needed: %d\n", io_sq->qid, - num_descs, num_entries_needed); + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Queue: %d num_descs: %d num_entries_needed: %d\n", + io_sq->qid, num_descs, num_entries_needed); return num_entries_needed > io_sq->entries_in_tx_burst_left; } @@ -151,14 +152,16 @@ static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq) u16 max_entries_in_tx_burst = io_sq->llq_info.max_entries_in_tx_burst; u16 tail = io_sq->tail; - pr_debug("Write submission queue doorbell for queue: %d tail: %d\n", - io_sq->qid, tail); + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Write submission queue doorbell for queue: %d tail: %d\n", + io_sq->qid, tail); writel(tail, io_sq->db_addr); if (is_llq_max_tx_burst_exists(io_sq)) { - pr_debug("Reset available entries in tx burst for queue %d to %d\n", - io_sq->qid, max_entries_in_tx_burst); + netdev_dbg(ena_com_io_sq_to_ena_dev(io_sq)->net_device, + "Reset available entries in tx burst for queue %d to %d\n", + io_sq->qid, max_entries_in_tx_burst); io_sq->entries_in_tx_burst_left = max_entries_in_tx_burst; } @@ -176,8 +179,9 @@ static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq) need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH); if (unlikely(need_update)) { - pr_debug("Write completion queue doorbell for queue %d: head: %d\n", - io_cq->qid, head); + netdev_dbg(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "Write completion queue doorbell for queue %d: head: %d\n", + io_cq->qid, head); writel(head, io_cq->cq_head_db_reg); io_cq->last_head_update = head; } @@ -240,7 +244,8 @@ static inline int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, *req_id = READ_ONCE(cdesc->req_id); if (unlikely(*req_id >= io_cq->q_depth)) { - pr_err("Invalid req id %d\n", cdesc->req_id); + netdev_err(ena_com_io_cq_to_ena_dev(io_cq)->net_device, + "Invalid req id %d\n", cdesc->req_id); return -EINVAL; } diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 985dea1870b5..371593ed0400 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ 
b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -4192,6 +4192,8 @@ static int ena_probe(struct pci_dev *pdev, const struct pci_device_id *ent) adapter->pdev = pdev; adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); + ena_dev->net_device = netdev; + pci_set_drvdata(pdev, adapter); rc = ena_device_init(ena_dev, pdev, &get_feat_ctx, &wd_state); From patchwork Thu Nov 26 21:20:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934785 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A131C64E75 for ; Thu, 26 Nov 2020 21:20:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0BE7621D93 for ; Thu, 26 Nov 2020 21:20:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="vZOKn1HC" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387652AbgKZVUo (ORCPT ); Thu, 26 Nov 2020 16:20:44 -0500 Received: from smtp-fw-9101.amazon.com ([207.171.184.25]:41062 "EHLO smtp-fw-9101.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVUo (ORCPT ); Thu, 26 Nov 2020 16:20:44 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425642; x=1637961642; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=OT2Fetwz5YeBn0JhFixPRL5++J1V2T4nghyiPlaLAb8=; b=vZOKn1HCjpL/zyzq2uy80bPNzCzxpOU5t0TLAZZAHWy6KJmu6MAW5EqI b/EmT/kVbulBjctDfgeLx8NGWd1eGukcgpNCwzskXK/HAifud+nHmku/j KADM7a57vEw22tlK4OM7QubW59kgq1FwVPI9X9ds9ULjD1l93QGaI2czo Y=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="91291299" Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-1e-57e1d233.us-east-1.amazon.com) ([10.47.23.38]) by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP; 26 Nov 2020 21:20:36 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38]) by email-inbound-relay-1e-57e1d233.us-east-1.amazon.com (Postfix) with ESMTPS id AA470141606; Thu, 26 Nov 2020 21:20:36 +0000 (UTC) Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by EX13MTAUEE001.ant.amazon.com (10.43.62.226) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:35 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:35 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:31 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , , Ido Segev , Igor Chauskin Subject: [RFC PATCH V2 net-next 3/9] net: ena: add explicit casting to variables Date: Thu, 26 Nov 2020 23:20:11 
+0200 Message-ID: <1606425617-13112-4-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski This patch adds explicit casting to some implicit conversions in the ena driver. The implicit conversions fail some of our static checkers that search for accidental conversions in our driver. Adding this cast won't affect the end results, and would sooth the checkers. Signed-off-by: Ido Segev Signed-off-by: Igor Chauskin Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_com.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c index e168edf3c930..7910d8e68a99 100644 --- a/drivers/net/ethernet/amazon/ena/ena_com.c +++ b/drivers/net/ethernet/amazon/ena/ena_com.c @@ -1369,7 +1369,7 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue, "Failed to submit command [%ld]\n", PTR_ERR(comp_ctx)); - return PTR_ERR(comp_ctx); + return (int)PTR_ERR(comp_ctx); } ret = ena_com_wait_and_process_admin_cq(comp_ctx, admin_queue); @@ -1595,7 +1595,7 @@ int ena_com_set_aenq_config(struct ena_com_dev *ena_dev, u32 groups_flag) int ena_com_get_dma_width(struct ena_com_dev *ena_dev) { u32 caps = ena_com_reg_bar_read32(ena_dev, ENA_REGS_CAPS_OFF); - int width; + u32 width; if (unlikely(caps == ENA_MMIO_READ_TIMEOUT)) { netdev_err(ena_dev->net_device, "Reg read timeout occurred\n"); @@ -2266,7 +2266,7 @@ int ena_com_set_dev_mtu(struct ena_com_dev *ena_dev, int mtu) cmd.aq_common_descriptor.opcode = ENA_ADMIN_SET_FEATURE; cmd.aq_common_descriptor.flags = 0; cmd.feat_common.feature_id = ENA_ADMIN_MTU; - cmd.u.mtu.mtu = mtu; + cmd.u.mtu.mtu = (u32)mtu; ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, @@ -2689,7 +2689,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev) return ret; } - cmd.control_buffer.length = (1ULL << rss->tbl_log_size) * + cmd.control_buffer.length = (u32)(1ULL << rss->tbl_log_size) * sizeof(struct ena_admin_rss_ind_table_entry); ret = ena_com_execute_admin_command(admin_queue, @@ -2712,7 +2712,7 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl) u32 tbl_size; int i, rc; - tbl_size = (1ULL << rss->tbl_log_size) * + tbl_size = (u32)(1ULL << rss->tbl_log_size) * sizeof(struct ena_admin_rss_ind_table_entry); rc = ena_com_get_feature_ex(ena_dev, &get_resp, From patchwork Thu Nov 26 21:20:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934787 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0EC8C64E7A for ; Thu, 26 Nov 2020 
21:20:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5484021D1A for ; Thu, 26 Nov 2020 21:20:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="TX+tm8Xi" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387765AbgKZVUq (ORCPT ); Thu, 26 Nov 2020 16:20:46 -0500 Received: from smtp-fw-9101.amazon.com ([207.171.184.25]:41062 "EHLO smtp-fw-9101.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVUq (ORCPT ); Thu, 26 Nov 2020 16:20:46 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425645; x=1637961645; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=VNTxgNlhmdci3718NwQdTNBSex6doRGzHzqUc8GkY+U=; b=TX+tm8XiBncV3qifkx6sfcn8h7+OpCO3XXjwRuUBZ+lagkfbBjZAHgSH PFIJZzo8rB4JAWKLy9aG+Xp1G02L4pQbHwMZ0vnjmxu/PC9ISOHk546gJ 6vZkNBQ82w/8v2hyQ58rCq6Sneqkl1cfhN5D/sTmglVtCpaYyymkIdAMb 4=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="91291304" Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com) ([10.47.23.38]) by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP; 26 Nov 2020 21:20:44 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34]) by email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com (Postfix) with ESMTPS id 1CD3C282380; Thu, 26 Nov 2020 21:20:44 +0000 (UTC) Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by EX13MTAUEE001.ant.amazon.com (10.43.62.226) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:38 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:39 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:35 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , , Kuniyuki Iwashima Subject: [RFC PATCH V2 net-next 4/9] net: ena: fix coding style nits Date: Thu, 26 Nov 2020 23:20:12 +0200 Message-ID: <1606425617-13112-5-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski This commit fixes two nits, but it does not generate any change to binary because of the optimization of gcc. - use `count` instead of `channels->combined_count` - change return type from `int` to `bool` Also add spaces and change macro order in OR assignment to make the code easier to read. 
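For illustration only, a minimal sketch (not part of this patch) of why the bool return type reads better at the call site; it simply restates the ena_xdp_legal_queue_count() and ena_set_channels() logic shown in the diff below:

    /* A bool-returning predicate documents intent: the result is a yes/no
     * answer, never an error code or a count.
     */
    static bool ena_xdp_legal_queue_count(struct ena_adapter *adapter, u32 queues)
    {
            /* XDP needs one dedicated TX queue per RX queue. */
            return 2 * queues <= adapter->max_num_io_queues;
    }

    /* Call site in ena_set_channels(): 'count' was already validated, so it
     * is used directly instead of re-reading channels->combined_count.
     */
    if (count < ENA_MIN_NUM_IO_QUEUES ||
        (ena_xdp_present(adapter) &&
         !ena_xdp_legal_queue_count(adapter, count)))
            return -EINVAL;
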
Signed-off-by: Sameeh Jubran Signed-off-by: Kuniyuki Iwashima Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_eth_com.c | 5 +++-- drivers/net/ethernet/amazon/ena/ena_ethtool.c | 2 +- drivers/net/ethernet/amazon/ena/ena_netdev.h | 4 ++-- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.c b/drivers/net/ethernet/amazon/ena/ena_eth_com.c index 85daeac219ec..c3be751e7379 100644 --- a/drivers/net/ethernet/amazon/ena/ena_eth_com.c +++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.c @@ -578,6 +578,7 @@ int ena_com_rx_pkt(struct ena_com_io_cq *io_cq, ena_com_rx_set_flags(io_cq, ena_rx_ctx, cdesc); ena_rx_ctx->descs = nb_hw_desc; + return 0; } @@ -602,8 +603,8 @@ int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq, desc->ctrl = ENA_ETH_IO_RX_DESC_FIRST_MASK | ENA_ETH_IO_RX_DESC_LAST_MASK | - (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK) | - ENA_ETH_IO_RX_DESC_COMP_REQ_MASK; + ENA_ETH_IO_RX_DESC_COMP_REQ_MASK | + (io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK); desc->req_id = req_id; diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c index 6cdd9efe8df3..2ad44ae74cf6 100644 --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c @@ -839,7 +839,7 @@ static int ena_set_channels(struct net_device *netdev, /* The check for max value is already done in ethtool */ if (count < ENA_MIN_NUM_IO_QUEUES || (ena_xdp_present(adapter) && - !ena_xdp_legal_queue_count(adapter, channels->combined_count))) + !ena_xdp_legal_queue_count(adapter, count))) return -EINVAL; return ena_update_queue_count(adapter, count); diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index 30eb686749dc..c39f41711c31 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -433,8 +433,8 @@ static inline bool ena_xdp_present_ring(struct ena_ring *ring) return !!ring->xdp_bpf_prog; } -static inline int ena_xdp_legal_queue_count(struct ena_adapter *adapter, - u32 queues) +static inline bool ena_xdp_legal_queue_count(struct ena_adapter *adapter, + u32 queues) { return 2 * queues <= adapter->max_num_io_queues; } From patchwork Thu Nov 26 21:20:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934793 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 15720C63798 for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id BFC4D21D1A for ; Thu, 26 Nov 2020 21:21:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="qU5Wi1Qn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387778AbgKZVUv (ORCPT ); Thu, 26 Nov 2020 
16:20:51 -0500 Received: from smtp-fw-2101.amazon.com ([72.21.196.25]:52491 "EHLO smtp-fw-2101.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVUv (ORCPT ); Thu, 26 Nov 2020 16:20:51 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425651; x=1637961651; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=C3JrDT9xS0rE0BUzca9wKEp7kDGK/qpBwfS4rcvUS/Y=; b=qU5Wi1QnNIXUDewsHcdGFmlADj/CQs3vp9iFiPh6ndUsJYLIhG42aEeW V0ZCqXZABeBh4aI+Q5GM7/Hr4eSMFYIXCLd3SKZi5gapjw7D6YtdYY1Ec OfxEPlQ2YoOb8CDbr6Y/npHyV1dhv7KMwUM6LDl16yM/w2nwoyPOqr/uS 4=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="66001625" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-1e-42f764a0.us-east-1.amazon.com) ([10.43.8.6]) by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP; 26 Nov 2020 21:20:44 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34]) by email-inbound-relay-1e-42f764a0.us-east-1.amazon.com (Postfix) with ESMTPS id 72E4BA71AB; Thu, 26 Nov 2020 21:20:43 +0000 (UTC) Received: from EX13D08UEB001.ant.amazon.com (10.43.60.245) by EX13MTAUEE001.ant.amazon.com (10.43.62.200) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:42 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEB001.ant.amazon.com (10.43.60.245) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:42 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:39 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , Subject: [RFC PATCH V2 net-next 5/9] net: ena: aggregate stats increase into a function Date: Thu, 26 Nov 2020 23:20:13 +0200 Message-ID: <1606425617-13112-6-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski Introduce ena_increase_stat() function to increase statistics by a certain number. The function includes the - lock aquire (on 32bit machines) - stat increase - lock release (on 32bit machines) line sequence that is ubiquitous across the driver. The function increases a single stat at a time and several stats which are increased together weren't put into a function to avoid calling the function several times for each stat which looks bad and might decrease performance. 
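The new helper covers only the writer side. For context, a sketch of the matching reader side, assuming the usual u64_stats_fetch_begin_irq()/u64_stats_fetch_retry_irq() pairing used by stats readers such as ena_get_stats64(); the ena_read_stat() helper name is hypothetical and not part of this patch:

    /* Sketch: reading a stat that writers bump via ena_increase_stat().
     * On 32-bit machines the seqcount retry loop makes the 64-bit read
     * consistent; on 64-bit machines these calls compile away.
     */
    static u64 ena_read_stat(const u64 *statp, const struct u64_stats_sync *syncp)
    {
            unsigned int start;
            u64 val;

            do {
                    start = u64_stats_fetch_begin_irq(syncp);
                    val = *statp;
            } while (u64_stats_fetch_retry_irq(syncp, start));

            return val;
    }
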
Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 167 ++++++++----------- 1 file changed, 68 insertions(+), 99 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 371593ed0400..9c3e0e3e33b5 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -80,6 +80,15 @@ static void ena_unmap_tx_buff(struct ena_ring *tx_ring, static int ena_create_io_tx_queues_in_range(struct ena_adapter *adapter, int first_index, int count); +/* Increase a stat by cnt while holding syncp seqlock on 32bit machines */ +static void ena_increase_stat(u64 *statp, u64 cnt, + struct u64_stats_sync *syncp) +{ + u64_stats_update_begin(syncp); + (*statp) += cnt; + u64_stats_update_end(syncp); +} + static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue) { struct ena_adapter *adapter = netdev_priv(dev); @@ -92,9 +101,7 @@ static void ena_tx_timeout(struct net_device *dev, unsigned int txqueue) return; adapter->reset_reason = ENA_REGS_RESET_OS_NETDEV_WD; - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.tx_timeout++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.tx_timeout, 1, &adapter->syncp); netif_err(adapter, tx_err, dev, "Transmit time out\n"); } @@ -154,9 +161,8 @@ static int ena_xmit_common(struct net_device *dev, if (unlikely(rc)) { netif_err(adapter, tx_queued, dev, "Failed to prepare tx bufs\n"); - u64_stats_update_begin(&ring->syncp); - ring->tx_stats.prepare_ctx_err++; - u64_stats_update_end(&ring->syncp); + ena_increase_stat(&ring->tx_stats.prepare_ctx_err, 1, + &ring->syncp); if (rc != -ENOMEM) { adapter->reset_reason = ENA_REGS_RESET_DRIVER_INVALID_STATE; @@ -264,9 +270,8 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring, return 0; error_report_dma_error: - u64_stats_update_begin(&xdp_ring->syncp); - xdp_ring->tx_stats.dma_mapping_err++; - u64_stats_update_end(&xdp_ring->syncp); + ena_increase_stat(&xdp_ring->tx_stats.dma_mapping_err, 1, + &xdp_ring->syncp); netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map xdp buff\n"); xdp_return_frame_rx_napi(tx_info->xdpf); @@ -320,9 +325,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev, * has a mb */ ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq); - u64_stats_update_begin(&xdp_ring->syncp); - xdp_ring->tx_stats.doorbells++; - u64_stats_update_end(&xdp_ring->syncp); + ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp); return NETDEV_TX_OK; @@ -369,9 +372,7 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, xdp_stat = &rx_ring->rx_stats.xdp_invalid; } - u64_stats_update_begin(&rx_ring->syncp); - (*xdp_stat)++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(xdp_stat, 1, &rx_ring->syncp); out: rcu_read_unlock(); @@ -924,9 +925,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring, page = alloc_page(gfp); if (unlikely(!page)) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.page_alloc_fail++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.page_alloc_fail, 1, + &rx_ring->syncp); return -ENOMEM; } @@ -936,9 +936,8 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring, dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE, DMA_BIDIRECTIONAL); if (unlikely(dma_mapping_error(rx_ring->dev, dma))) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.dma_mapping_err++; - 
u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.dma_mapping_err, 1, + &rx_ring->syncp); __free_page(page); return -EIO; @@ -1011,9 +1010,8 @@ static int ena_refill_rx_bufs(struct ena_ring *rx_ring, u32 num) } if (unlikely(i < num)) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.refil_partial++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.refil_partial, 1, + &rx_ring->syncp); netif_warn(rx_ring->adapter, rx_err, rx_ring->netdev, "Refilled rx qid %d with only %d buffers (from %d)\n", rx_ring->qid, i, num); @@ -1189,9 +1187,7 @@ static int handle_invalid_req_id(struct ena_ring *ring, u16 req_id, "Invalid req_id: %hu\n", req_id); - u64_stats_update_begin(&ring->syncp); - ring->tx_stats.bad_req_id++; - u64_stats_update_end(&ring->syncp); + ena_increase_stat(&ring->tx_stats.bad_req_id, 1, &ring->syncp); /* Trigger device reset */ ring->adapter->reset_reason = ENA_REGS_RESET_INV_TX_REQ_ID; @@ -1302,9 +1298,8 @@ static int ena_clean_tx_irq(struct ena_ring *tx_ring, u32 budget) if (netif_tx_queue_stopped(txq) && above_thresh && test_bit(ENA_FLAG_DEV_UP, &tx_ring->adapter->flags)) { netif_tx_wake_queue(txq); - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.queue_wakeup++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1, + &tx_ring->syncp); } __netif_tx_unlock(txq); } @@ -1323,9 +1318,8 @@ static struct sk_buff *ena_alloc_skb(struct ena_ring *rx_ring, bool frags) rx_ring->rx_copybreak); if (unlikely(!skb)) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.skb_alloc_fail++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.skb_alloc_fail, 1, + &rx_ring->syncp); netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev, "Failed to allocate skb. 
frags: %d\n", frags); return NULL; @@ -1453,9 +1447,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring, (ena_rx_ctx->l3_csum_err))) { /* ipv4 checksum error */ skb->ip_summed = CHECKSUM_NONE; - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.bad_csum++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1, + &rx_ring->syncp); netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev, "RX IPv4 header checksum error\n"); return; @@ -1466,9 +1459,8 @@ static void ena_rx_checksum(struct ena_ring *rx_ring, (ena_rx_ctx->l4_proto == ENA_ETH_IO_L4_PROTO_UDP))) { if (unlikely(ena_rx_ctx->l4_csum_err)) { /* TCP/UDP checksum error */ - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.bad_csum++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.bad_csum, 1, + &rx_ring->syncp); netif_dbg(rx_ring->adapter, rx_err, rx_ring->netdev, "RX L4 checksum error\n"); skb->ip_summed = CHECKSUM_NONE; @@ -1477,13 +1469,11 @@ static void ena_rx_checksum(struct ena_ring *rx_ring, if (likely(ena_rx_ctx->l4_csum_checked)) { skb->ip_summed = CHECKSUM_UNNECESSARY; - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.csum_good++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.csum_good, 1, + &rx_ring->syncp); } else { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.csum_unchecked++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.csum_unchecked, 1, + &rx_ring->syncp); skb->ip_summed = CHECKSUM_NONE; } } else { @@ -1675,14 +1665,12 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, adapter = netdev_priv(rx_ring->netdev); if (rc == -ENOSPC) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.bad_desc_num++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.bad_desc_num, 1, + &rx_ring->syncp); adapter->reset_reason = ENA_REGS_RESET_TOO_MANY_RX_DESCS; } else { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.bad_req_id++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.bad_req_id, 1, + &rx_ring->syncp); adapter->reset_reason = ENA_REGS_RESET_INV_RX_REQ_ID; } @@ -1743,9 +1731,8 @@ static void ena_unmask_interrupt(struct ena_ring *tx_ring, tx_ring->smoothed_interval, true); - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.unmask_interrupt++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.unmask_interrupt, 1, + &tx_ring->syncp); /* It is a shared MSI-X. * Tx and Rx CQ have pointer to it. 
@@ -2552,9 +2539,8 @@ static int ena_up(struct ena_adapter *adapter) if (test_bit(ENA_FLAG_LINK_UP, &adapter->flags)) netif_carrier_on(adapter->netdev); - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.interface_up++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.interface_up, 1, + &adapter->syncp); set_bit(ENA_FLAG_DEV_UP, &adapter->flags); @@ -2592,9 +2578,8 @@ static void ena_down(struct ena_adapter *adapter) clear_bit(ENA_FLAG_DEV_UP, &adapter->flags); - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.interface_down++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.interface_down, 1, + &adapter->syncp); netif_carrier_off(adapter->netdev); netif_tx_disable(adapter->netdev); @@ -2822,15 +2807,12 @@ static int ena_check_and_linearize_skb(struct ena_ring *tx_ring, (header_len < tx_ring->tx_max_header_size)) return 0; - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.linearize++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.linearize, 1, &tx_ring->syncp); rc = skb_linearize(skb); if (unlikely(rc)) { - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.linearize_failed++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.linearize_failed, 1, + &tx_ring->syncp); } return rc; @@ -2870,9 +2852,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring, tx_ring->push_buf_intermediate_buf); *header_len = push_len; if (unlikely(skb->data != *push_hdr)) { - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.llq_buffer_copy++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.llq_buffer_copy, 1, + &tx_ring->syncp); delta = push_len - skb_head_len; } @@ -2929,9 +2910,8 @@ static int ena_tx_map_skb(struct ena_ring *tx_ring, return 0; error_report_dma_error: - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.dma_mapping_err++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.dma_mapping_err, 1, + &tx_ring->syncp); netif_warn(adapter, tx_queued, adapter->netdev, "Failed to map skb\n"); tx_info->skb = NULL; @@ -3008,9 +2988,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev) __func__, qid); netif_tx_stop_queue(txq); - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.queue_stop++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.queue_stop, 1, + &tx_ring->syncp); /* There is a rare condition where this function decide to * stop the queue but meanwhile clean_tx_irq updates @@ -3025,9 +3004,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev) if (ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, ENA_TX_WAKEUP_THRESH)) { netif_tx_wake_queue(txq); - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.queue_wakeup++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.queue_wakeup, 1, + &tx_ring->syncp); } } @@ -3036,9 +3014,8 @@ static netdev_tx_t ena_start_xmit(struct sk_buff *skb, struct net_device *dev) * has a mb */ ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); - u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.doorbells++; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.doorbells, 1, + &tx_ring->syncp); } return NETDEV_TX_OK; @@ -3673,9 +3650,8 @@ static int check_missing_comp_in_tx_queue(struct ena_adapter *adapter, rc = -EIO; } - 
u64_stats_update_begin(&tx_ring->syncp); - tx_ring->tx_stats.missed_tx += missed_tx; - u64_stats_update_end(&tx_ring->syncp); + ena_increase_stat(&tx_ring->tx_stats.missed_tx , missed_tx, + &tx_ring->syncp); return rc; } @@ -3758,9 +3734,8 @@ static void check_for_empty_rx_ring(struct ena_adapter *adapter) rx_ring->empty_rx_queue++; if (rx_ring->empty_rx_queue >= EMPTY_RX_REFILL) { - u64_stats_update_begin(&rx_ring->syncp); - rx_ring->rx_stats.empty_rx_ring++; - u64_stats_update_end(&rx_ring->syncp); + ena_increase_stat(&rx_ring->rx_stats.empty_rx_ring, 1, + &rx_ring->syncp); netif_err(adapter, drv, adapter->netdev, "Trigger refill for ring %d\n", i); @@ -3790,9 +3765,8 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter) if (unlikely(time_is_before_jiffies(keep_alive_expired))) { netif_err(adapter, drv, adapter->netdev, "Keep alive watchdog timeout.\n"); - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.wd_expired++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.wd_expired, 1, + &adapter->syncp); adapter->reset_reason = ENA_REGS_RESET_KEEP_ALIVE_TO; set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); } @@ -3803,9 +3777,8 @@ static void check_for_admin_com_state(struct ena_adapter *adapter) if (unlikely(!ena_com_get_admin_running_state(adapter->ena_dev))) { netif_err(adapter, drv, adapter->netdev, "ENA admin queue is not in running state!\n"); - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.admin_q_pause++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.admin_q_pause, 1, + &adapter->syncp); adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); } @@ -4441,9 +4414,7 @@ static int __maybe_unused ena_suspend(struct device *dev_d) struct pci_dev *pdev = to_pci_dev(dev_d); struct ena_adapter *adapter = pci_get_drvdata(pdev); - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.suspend++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.suspend, 1, &adapter->syncp); rtnl_lock(); if (unlikely(test_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags))) { @@ -4464,9 +4435,7 @@ static int __maybe_unused ena_resume(struct device *dev_d) struct ena_adapter *adapter = dev_get_drvdata(dev_d); int rc; - u64_stats_update_begin(&adapter->syncp); - adapter->dev_stats.resume++; - u64_stats_update_end(&adapter->syncp); + ena_increase_stat(&adapter->dev_stats.resume, 1, &adapter->syncp); rtnl_lock(); rc = ena_restore_device(adapter); From patchwork Thu Nov 26 21:20:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934795 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C75AC6379D for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 26A6A21D1A for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) 
Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="hp5BmmJN" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387835AbgKZVU6 (ORCPT ); Thu, 26 Nov 2020 16:20:58 -0500 Received: from smtp-fw-6002.amazon.com ([52.95.49.90]:42684 "EHLO smtp-fw-6002.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVU5 (ORCPT ); Thu, 26 Nov 2020 16:20:57 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425655; x=1637961655; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=iIu/SjAKY+rivIJHrG24oAT4EVz42RQJPwmI6EHyvkM=; b=hp5BmmJN3OImFti//okPOMl7lYSqif4xf9q/teDnx61EVTvnkCLhY4oP JCp57HKTDs/4KVxjspgBUFw+yzzSkb9rpPHfGmG3CoD5QNKeWEKH5Wob9 u9dnSBJsA+pVPwmRteV6nI57mC78OTZQYf1rG7GBdURJ0PRgIpAUz6uEM 4=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="67544134" Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO email-inbound-relay-1a-807d4a99.us-east-1.amazon.com) ([10.43.8.2]) by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP; 26 Nov 2020 21:20:55 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34]) by email-inbound-relay-1a-807d4a99.us-east-1.amazon.com (Postfix) with ESMTPS id CE794A1FC4; Thu, 26 Nov 2020 21:20:54 +0000 (UTC) Received: from EX13D08UEB004.ant.amazon.com (10.43.60.142) by EX13MTAUEE001.ant.amazon.com (10.43.62.200) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:45 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEB004.ant.amazon.com (10.43.60.142) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:45 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:42 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , Subject: [RFC PATCH V2 net-next 6/9] net: ena: use xdp_frame in XDP TX flow Date: Thu, 26 Nov 2020 23:20:14 +0200 Message-ID: <1606425617-13112-7-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski Rename the ena_xdp_xmit_buff() function to ena_xdp_xmit_frame() and pass it an xdp_frame struct instead of xdp_buff. This change lays the ground for XDP redirect implementation which uses xdp_frames when 'xmit'ing packets. 
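For context, a sketch (not the exact driver code) of the conversion step this change introduces; the NULL check is added here purely for illustration, since xdp_convert_buff_to_frame() fails when the buffer lacks room for the xdp_frame metadata, which is placed in the packet headroom:

    /* Sketch: turn the xdp_buff the BPF program ran on into an xdp_frame
     * and hand it to the XDP TX queue paired with this RX queue.
     */
    struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);

    if (unlikely(!xdpf)) {
            /* No headroom for the frame metadata - treat as a drop. */
            verdict = XDP_DROP;
    } else {
            ena_xdp_xmit_frame(rx_ring->netdev, xdpf,
                               rx_ring->qid + rx_ring->adapter->num_io_queues);
    }
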
Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 46 ++++++++++---------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 9c3e0e3e33b5..873964654a57 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -233,18 +233,18 @@ static int ena_xdp_io_poll(struct napi_struct *napi, int budget) return ret; } -static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring, - struct ena_tx_buffer *tx_info, - struct xdp_buff *xdp, - void **push_hdr, - u32 *push_len) +static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring, + struct ena_tx_buffer *tx_info, + struct xdp_frame *xdpf, + void **push_hdr, + u32 *push_len) { struct ena_adapter *adapter = xdp_ring->adapter; struct ena_com_buf *ena_buf; dma_addr_t dma = 0; u32 size; - tx_info->xdpf = xdp_convert_buff_to_frame(xdp); + tx_info->xdpf = xdpf; size = tx_info->xdpf->len; ena_buf = tx_info->bufs; @@ -281,29 +281,31 @@ static int ena_xdp_tx_map_buff(struct ena_ring *xdp_ring, return -EINVAL; } -static int ena_xdp_xmit_buff(struct net_device *dev, - struct xdp_buff *xdp, - int qid, - struct ena_rx_buffer *rx_info) +static int ena_xdp_xmit_frame(struct net_device *dev, + struct xdp_frame *xdpf, + int qid) { struct ena_adapter *adapter = netdev_priv(dev); struct ena_com_tx_ctx ena_tx_ctx = {}; struct ena_tx_buffer *tx_info; struct ena_ring *xdp_ring; + struct page *rx_buff_page; u16 next_to_use, req_id; int rc; void *push_hdr; u32 push_len; + rx_buff_page = virt_to_page(xdpf->data); + xdp_ring = &adapter->tx_ring[qid]; next_to_use = xdp_ring->next_to_use; req_id = xdp_ring->free_ids[next_to_use]; tx_info = &xdp_ring->tx_buffer_info[req_id]; tx_info->num_of_bufs = 0; - page_ref_inc(rx_info->page); - tx_info->xdp_rx_page = rx_info->page; + page_ref_inc(rx_buff_page); + tx_info->xdp_rx_page = rx_buff_page; - rc = ena_xdp_tx_map_buff(xdp_ring, tx_info, xdp, &push_hdr, &push_len); + rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len); if (unlikely(rc)) goto error_drop_packet; @@ -318,7 +320,7 @@ static int ena_xdp_xmit_buff(struct net_device *dev, tx_info, &ena_tx_ctx, next_to_use, - xdp->data_end - xdp->data); + xdpf->len); if (rc) goto error_unmap_dma; /* trigger the dma engine. 
ena_com_write_sq_doorbell() @@ -337,12 +339,11 @@ static int ena_xdp_xmit_buff(struct net_device *dev, return NETDEV_TX_OK; } -static int ena_xdp_execute(struct ena_ring *rx_ring, - struct xdp_buff *xdp, - struct ena_rx_buffer *rx_info) +static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp) { struct bpf_prog *xdp_prog; u32 verdict = XDP_PASS; + struct xdp_frame *xdpf; u64 *xdp_stat; rcu_read_lock(); @@ -354,10 +355,9 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, verdict = bpf_prog_run_xdp(xdp_prog, xdp); if (verdict == XDP_TX) { - ena_xdp_xmit_buff(rx_ring->netdev, - xdp, - rx_ring->qid + rx_ring->adapter->num_io_queues, - rx_info); + xdpf = xdp_convert_buff_to_frame(xdp); + ena_xdp_xmit_frame(rx_ring->netdev, xdpf, + rx_ring->qid + rx_ring->adapter->num_io_queues); xdp_stat = &rx_ring->rx_stats.xdp_tx; } else if (unlikely(verdict == XDP_ABORTED)) { @@ -1521,7 +1521,7 @@ static int ena_xdp_handle_buff(struct ena_ring *rx_ring, struct xdp_buff *xdp) if (unlikely(rx_ring->ena_bufs[0].len > ENA_XDP_MAX_MTU)) return XDP_DROP; - ret = ena_xdp_execute(rx_ring, xdp, rx_info); + ret = ena_xdp_execute(rx_ring, xdp); /* The xdp program might expand the headers */ if (ret == XDP_PASS) { @@ -1600,7 +1600,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, if (unlikely(!skb)) { /* The page might not actually be freed here since the * page reference count is incremented in - * ena_xdp_xmit_buff(), and it will be decreased only + * ena_xdp_xmit_frame(), and it will be decreased only * when send completion was received from the device */ if (xdp_verdict == XDP_TX) From patchwork Thu Nov 26 21:20:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934797 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9258CC64E7A for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 51B7721D93 for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="E0oUX6+Q" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387913AbgKZVVB (ORCPT ); Thu, 26 Nov 2020 16:21:01 -0500 Received: from smtp-fw-9102.amazon.com ([207.171.184.29]:36819 "EHLO smtp-fw-9102.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVVB (ORCPT ); Thu, 26 Nov 2020 16:21:01 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425660; x=1637961660; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=udjCjM0X9/5RgyUKE8rzA9Xxz2p5jcmIOH9OWm5eDCU=; b=E0oUX6+Qz/t8mI2nDtsL8J2iFeSpBcOVyw3uzX3mTodFnax9xWk1kmJ7 lDrshKOqNYpX8SmZbqEoDN4KfEUXDPfLKj2zGM6dvY0KAr+mkXVR0m5Bv ZCHAZBhgx2cynTqiMRs7yqwTeVACmi4rMxEg8dikmknqN3a3ZzIn2Y3tW Y=; X-IronPort-AV: 
E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="99555507" Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-1a-715bee71.us-east-1.amazon.com) ([10.47.23.38]) by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP; 26 Nov 2020 21:20:59 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34]) by email-inbound-relay-1a-715bee71.us-east-1.amazon.com (Postfix) with ESMTPS id 13681A49B8; Thu, 26 Nov 2020 21:20:58 +0000 (UTC) Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by EX13MTAUEE001.ant.amazon.com (10.43.62.226) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:49 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:49 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:46 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , Subject: [RFC PATCH V2 net-next 7/9] net: ena: introduce XDP redirect implementation Date: Thu, 26 Nov 2020 23:20:15 +0200 Message-ID: <1606425617-13112-8-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski This patch adds a partial support for the XDP_REDIRECT directive which instructs the driver to pass the packet to an interface specified by the program. The directive is passed to the driver by calling bpf_redirect() or bpf_redirect_map() functions from the eBPF program. To lay the ground for integration with the existing XDP TX implementation the patch removes the redundant page ref count increase in ena_xdp_xmit_frame() and then decrease in ena_clean_rx_irq(). Instead it only DMA unmaps descriptors for which XDP TX or REDIRECT directive was received. The XDP Redirect support is still missing .ndo_xdp_xmit function implementation, which allows to redirect packet to an ENA interface, which would be added in a later patch. 
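For context, the verdict originates in the eBPF program. A minimal illustrative program (not part of this patch) that produces XDP_REDIRECT through a DEVMAP; the map name tx_port, the program name, and the use of key 0 are assumptions made for the sketch:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Device map holding the ifindex of the egress interface at key 0. */
    struct {
            __uint(type, BPF_MAP_TYPE_DEVMAP);
            __uint(key_size, sizeof(int));
            __uint(value_size, sizeof(int));
            __uint(max_entries, 1);
    } tx_port SEC(".maps");

    SEC("xdp")
    int xdp_redirect_prog(struct xdp_md *ctx)
    {
            /* Returns XDP_REDIRECT on success; the driver then calls
             * xdp_do_redirect() to forward the frame and, at the end of
             * its NAPI poll, xdp_do_flush_map() to kick the target.
             */
            return bpf_redirect_map(&tx_port, 0, 0);
    }
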
Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_ethtool.c | 1 + drivers/net/ethernet/amazon/ena/ena_netdev.c | 74 +++++++++++-------- drivers/net/ethernet/amazon/ena/ena_netdev.h | 1 + 3 files changed, 47 insertions(+), 29 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_ethtool.c b/drivers/net/ethernet/amazon/ena/ena_ethtool.c index 2ad44ae74cf6..d6cc7aa612b7 100644 --- a/drivers/net/ethernet/amazon/ena/ena_ethtool.c +++ b/drivers/net/ethernet/amazon/ena/ena_ethtool.c @@ -95,6 +95,7 @@ static const struct ena_stats ena_stats_rx_strings[] = { ENA_STAT_RX_ENTRY(xdp_pass), ENA_STAT_RX_ENTRY(xdp_tx), ENA_STAT_RX_ENTRY(xdp_invalid), + ENA_STAT_RX_ENTRY(xdp_redirect), }; static const struct ena_stats ena_stats_ena_com_strings[] = { diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 873964654a57..545b76004fd9 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -289,21 +289,17 @@ static int ena_xdp_xmit_frame(struct net_device *dev, struct ena_com_tx_ctx ena_tx_ctx = {}; struct ena_tx_buffer *tx_info; struct ena_ring *xdp_ring; - struct page *rx_buff_page; u16 next_to_use, req_id; int rc; void *push_hdr; u32 push_len; - rx_buff_page = virt_to_page(xdpf->data); - xdp_ring = &adapter->tx_ring[qid]; next_to_use = xdp_ring->next_to_use; req_id = xdp_ring->free_ids[next_to_use]; tx_info = &xdp_ring->tx_buffer_info[req_id]; tx_info->num_of_bufs = 0; - page_ref_inc(rx_buff_page); - tx_info->xdp_rx_page = rx_buff_page; + tx_info->xdp_rx_page = virt_to_page(xdpf->data); rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len); if (unlikely(rc)) @@ -335,7 +331,7 @@ static int ena_xdp_xmit_frame(struct net_device *dev, ena_unmap_tx_buff(xdp_ring, tx_info); tx_info->xdpf = NULL; error_drop_packet: - __free_page(tx_info->xdp_rx_page); + xdp_return_frame(xdpf); return NETDEV_TX_OK; } @@ -354,20 +350,28 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp) verdict = bpf_prog_run_xdp(xdp_prog, xdp); - if (verdict == XDP_TX) { + switch (verdict) { + case XDP_TX: xdpf = xdp_convert_buff_to_frame(xdp); ena_xdp_xmit_frame(rx_ring->netdev, xdpf, rx_ring->qid + rx_ring->adapter->num_io_queues); - xdp_stat = &rx_ring->rx_stats.xdp_tx; - } else if (unlikely(verdict == XDP_ABORTED)) { + break; + case XDP_REDIRECT: + xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); + xdp_stat = &rx_ring->rx_stats.xdp_redirect; + break; + case XDP_ABORTED: trace_xdp_exception(rx_ring->netdev, xdp_prog, verdict); xdp_stat = &rx_ring->rx_stats.xdp_aborted; - } else if (unlikely(verdict == XDP_DROP)) { + break; + case XDP_DROP: xdp_stat = &rx_ring->rx_stats.xdp_drop; - } else if (unlikely(verdict == XDP_PASS)) { + break; + case XDP_PASS: xdp_stat = &rx_ring->rx_stats.xdp_pass; - } else { + break; + default: bpf_warn_invalid_xdp_action(verdict); xdp_stat = &rx_ring->rx_stats.xdp_invalid; } @@ -953,11 +957,20 @@ static int ena_alloc_rx_page(struct ena_ring *rx_ring, return 0; } +static void ena_unmap_rx_buff(struct ena_ring *rx_ring, + struct ena_rx_buffer *rx_info) +{ + struct ena_com_buf *ena_buf = &rx_info->ena_buf; + + dma_unmap_page(rx_ring->dev, ena_buf->paddr - rx_ring->rx_headroom, + ENA_PAGE_SIZE, + DMA_BIDIRECTIONAL); +} + static void ena_free_rx_page(struct ena_ring *rx_ring, struct ena_rx_buffer *rx_info) { struct page *page = rx_info->page; - struct ena_com_buf *ena_buf = &rx_info->ena_buf; if 
(unlikely(!page)) { netif_warn(rx_ring->adapter, rx_err, rx_ring->netdev, @@ -965,9 +978,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring, return; } - dma_unmap_page(rx_ring->dev, ena_buf->paddr - rx_ring->rx_headroom, - ENA_PAGE_SIZE, - DMA_BIDIRECTIONAL); + ena_unmap_rx_buff(rx_ring, rx_info); __free_page(page); rx_info->page = NULL; @@ -1391,9 +1402,7 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring, return NULL; do { - dma_unmap_page(rx_ring->dev, - dma_unmap_addr(&rx_info->ena_buf, paddr), - ENA_PAGE_SIZE, DMA_BIDIRECTIONAL); + ena_unmap_rx_buff(rx_ring, rx_info); skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page, rx_info->page_offset, len, ENA_PAGE_SIZE); @@ -1551,6 +1560,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, struct sk_buff *skb; int refill_required; struct xdp_buff xdp; + int xdp_flags = 0; int total_len = 0; int xdp_verdict; int rc = 0; @@ -1598,22 +1608,25 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, &next_to_clean); if (unlikely(!skb)) { - /* The page might not actually be freed here since the - * page reference count is incremented in - * ena_xdp_xmit_frame(), and it will be decreased only - * when send completion was received from the device - */ - if (xdp_verdict == XDP_TX) - ena_free_rx_page(rx_ring, - &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]); for (i = 0; i < ena_rx_ctx.descs; i++) { - rx_ring->free_ids[next_to_clean] = - rx_ring->ena_bufs[i].req_id; + int req_id = rx_ring->ena_bufs[i].req_id; + + rx_ring->free_ids[next_to_clean] = req_id; next_to_clean = ENA_RX_RING_IDX_NEXT(next_to_clean, rx_ring->ring_size); + + /* Packets was passed for transmission, unmap it + * from RX side. + */ + if (xdp_verdict == XDP_TX || xdp_verdict == XDP_REDIRECT) { + ena_unmap_rx_buff(rx_ring, + &rx_ring->rx_buffer_info[req_id]); + rx_ring->rx_buffer_info[req_id].page = NULL; + } } if (xdp_verdict != XDP_PASS) { + xdp_flags |= xdp_verdict; res_budget--; continue; } @@ -1659,6 +1672,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi, ena_refill_rx_bufs(rx_ring, refill_required); } + if (xdp_flags & XDP_REDIRECT) + xdp_do_flush_map(); + return work_done; error: diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index c39f41711c31..0fef876c23eb 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -239,6 +239,7 @@ struct ena_stats_rx { u64 xdp_pass; u64 xdp_tx; u64 xdp_invalid; + u64 xdp_redirect; }; struct ena_ring { From patchwork Thu Nov 26 21:20:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934791 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 385DAC63777 for ; Thu, 26 Nov 2020 21:21:08 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with 
ESMTP id EF99621D93 for ; Thu, 26 Nov 2020 21:21:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="g0bzeNTB" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387795AbgKZVU5 (ORCPT ); Thu, 26 Nov 2020 16:20:57 -0500 Received: from smtp-fw-2101.amazon.com ([72.21.196.25]:52491 "EHLO smtp-fw-2101.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2387781AbgKZVU4 (ORCPT ); Thu, 26 Nov 2020 16:20:56 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425656; x=1637961656; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=0Gs1uWeXJOpzPlOvZxS0bAwzyonTfQLHT9AmWSBQHVM=; b=g0bzeNTBT3Z0b25xstsLHdjcKaZq0Dtz+mtnCU3BAvoS1OfYvxPNcW+0 BWTC+HE2/TMLyNyHknPnVnHdhNi9HD/MkdCL4EAKmtYxN+e/MFrS8YlgI qNJCFvVdiUOwHMq4yBAqsJBZvqhNWywgW9ic2yD8xLmikEE0OODNFyHpH Y=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="66001640" Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com) ([10.43.8.6]) by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP; 26 Nov 2020 21:20:56 +0000 Received: from EX13MTAUEB002.ant.amazon.com (pdx1-ws-svc-p6-lb9-vlan3.pdx.amazon.com [10.236.137.198]) by email-inbound-relay-2b-4ff6265a.us-west-2.amazon.com (Postfix) with ESMTPS id 7B27DA21A6; Thu, 26 Nov 2020 21:20:54 +0000 (UTC) Received: from EX13D08UEB002.ant.amazon.com (10.43.60.107) by EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:52 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:52 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:49 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , Subject: [RFC PATCH V2 net-next 8/9] net: ena: use xdp_return_frame() to free xdp frames Date: Thu, 26 Nov 2020 23:20:16 +0200 Message-ID: <1606425617-13112-9-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski XDP subsystem has a function to free XDP frames and their associated pages. Using this function would help the driver's XDP implementation to adjust to new changes in the XDP subsystem in the kernel (e.g. introduction of XDP MB). Also, remove 'xdp_rx_page' field from ena_tx_buffer struct since it is no longer used. 
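For context, a simplified sketch of the completion-side pattern this enables, adapted from the ena_clean_xdp_irq() hunk below; the comments are illustrative rather than part of the patch:

    /* Sketch: on TX completion the frame itself is returned.
     * xdp_return_frame() frees it according to the memory model recorded
     * in the frame (order-0 page, shared page, page_pool, ...), instead
     * of the driver assuming a plain page and calling __free_page().
     */
    struct xdp_frame *xdpf = tx_info->xdpf;

    ena_unmap_tx_buff(xdp_ring, tx_info);
    tx_info->xdpf = NULL;
    xdp_return_frame(xdpf);
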
Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 3 +-- drivers/net/ethernet/amazon/ena/ena_netdev.h | 6 ------ 2 files changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 545b76004fd9..6c8767dce400 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -299,7 +299,6 @@ static int ena_xdp_xmit_frame(struct net_device *dev, req_id = xdp_ring->free_ids[next_to_use]; tx_info = &xdp_ring->tx_buffer_info[req_id]; tx_info->num_of_bufs = 0; - tx_info->xdp_rx_page = virt_to_page(xdpf->data); rc = ena_xdp_tx_map_frame(xdp_ring, tx_info, xdpf, &push_hdr, &push_len); if (unlikely(rc)) @@ -1828,7 +1827,7 @@ static int ena_clean_xdp_irq(struct ena_ring *xdp_ring, u32 budget) tx_pkts++; total_done += tx_info->tx_descs; - __free_page(tx_info->xdp_rx_page); + xdp_return_frame(xdpf); xdp_ring->free_ids[next_to_clean] = req_id; next_to_clean = ENA_TX_RING_IDX_NEXT(next_to_clean, xdp_ring->ring_size); diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index 0fef876c23eb..fed79c50a870 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -170,12 +170,6 @@ struct ena_tx_buffer { * the xdp queues */ struct xdp_frame *xdpf; - /* The rx page for the rx buffer that was received in rx and - * re transmitted on xdp tx queues as a result of XDP_TX action. - * We need to free the page once we finished cleaning the buffer in - * clean_xdp_irq() - */ - struct page *xdp_rx_page; /* Indicate if bufs[0] map the linear data of the skb. */ u8 map_linear_data; From patchwork Thu Nov 26 21:20:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kiyanovski, Arthur" X-Patchwork-Id: 11934799 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-18.8 required=3.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id DE977C63777 for ; Thu, 26 Nov 2020 21:21:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A037021D1A for ; Thu, 26 Nov 2020 21:21:30 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=amazon.com header.i=@amazon.com header.b="PgY3XCKF" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2387930AbgKZVVO (ORCPT ); Thu, 26 Nov 2020 16:21:14 -0500 Received: from smtp-fw-9102.amazon.com ([207.171.184.29]:36846 "EHLO smtp-fw-9102.amazon.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726178AbgKZVVO (ORCPT ); Thu, 26 Nov 2020 16:21:14 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209; t=1606425673; x=1637961673; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version; bh=zrVt78S0z1/DqkaUtYr+2mTuntVrcOpMGMitjaTWefM=; 
b=PgY3XCKFo2okTH+2P8toY1axCZo6p7kSacQLGGmFOLV/T4JaENpM0jaA hr1ZRpxjCCQXhO5v80JRkCmYTowD6rM9DDtjd2a0jENSOXZQSyvT5m685 v5EBHNwiMsI4Ghn0biEUwkAEH0GRBur1boCWitpveQ9fVZV22Hjs4t7Vx 4=; X-IronPort-AV: E=Sophos;i="5.78,373,1599523200"; d="scan'208";a="99555527" Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com) ([10.47.23.38]) by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP; 26 Nov 2020 21:21:13 +0000 Received: from EX13MTAUEE001.ant.amazon.com (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38]) by email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com (Postfix) with ESMTPS id 86AC9A18F1; Thu, 26 Nov 2020 21:21:12 +0000 (UTC) Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by EX13MTAUEE001.ant.amazon.com (10.43.62.226) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:57 +0000 Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Thu, 26 Nov 2020 21:20:57 +0000 Received: from HFA15-G63729NC.amazon.com (10.1.213.20) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Thu, 26 Nov 2020 21:20:53 +0000 From: To: , CC: Arthur Kiyanovski , , , , , , , , , , , , , , Subject: [RFC PATCH V2 net-next 9/9] net: ena: introduce ndo_xdp_xmit() function for XDP_REDIRECT Date: Thu, 26 Nov 2020 23:20:17 +0200 Message-ID: <1606425617-13112-10-git-send-email-akiyano@amazon.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1606425617-13112-1-git-send-email-akiyano@amazon.com> References: <1606425617-13112-1-git-send-email-akiyano@amazon.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC From: Arthur Kiyanovski This patch implements the ndo_xdp_xmit() net_device function which is called when a packet is redirected to this driver using an XDP_REDIRECT directive. The function receives an array of xdp frames that it needs to xmit. The TX queues that are used to xmit these frames are the XDP queues used by the XDP_TX flow. Therefore a lock is added to synchronize both flows (XDP_TX and XDP_REDIRECT). Signed-off-by: Shay Agroskin Signed-off-by: Arthur Kiyanovski --- drivers/net/ethernet/amazon/ena/ena_netdev.c | 83 +++++++++++++++++--- drivers/net/ethernet/amazon/ena/ena_netdev.h | 1 + 2 files changed, 72 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index 6c8767dce400..89197b52e828 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -281,20 +281,18 @@ static int ena_xdp_tx_map_frame(struct ena_ring *xdp_ring, return -EINVAL; } -static int ena_xdp_xmit_frame(struct net_device *dev, +static int ena_xdp_xmit_frame(struct ena_ring *xdp_ring, + struct net_device *dev, struct xdp_frame *xdpf, - int qid) + int flags) { - struct ena_adapter *adapter = netdev_priv(dev); struct ena_com_tx_ctx ena_tx_ctx = {}; struct ena_tx_buffer *tx_info; - struct ena_ring *xdp_ring; u16 next_to_use, req_id; - int rc; void *push_hdr; u32 push_len; + int rc; - xdp_ring = &adapter->tx_ring[qid]; next_to_use = xdp_ring->next_to_use; req_id = xdp_ring->free_ids[next_to_use]; tx_info = &xdp_ring->tx_buffer_info[req_id]; @@ -321,25 +319,76 @@ static int ena_xdp_xmit_frame(struct net_device *dev, /* trigger the dma engine. 
ena_com_write_sq_doorbell() * has a mb */ - ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq); - ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, &xdp_ring->syncp); + if (flags & XDP_XMIT_FLUSH) { + ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq); + ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, + &xdp_ring->syncp); + } - return NETDEV_TX_OK; + return rc; error_unmap_dma: ena_unmap_tx_buff(xdp_ring, tx_info); tx_info->xdpf = NULL; error_drop_packet: xdp_return_frame(xdpf); - return NETDEV_TX_OK; + return rc; +} + +static int ena_xdp_xmit(struct net_device *dev, int n, + struct xdp_frame **frames, u32 flags) +{ + struct ena_adapter *adapter = netdev_priv(dev); + int qid, i, err, drops = 0; + struct ena_ring *xdp_ring; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + if (!test_bit(ENA_FLAG_DEV_UP, &adapter->flags)) + return -ENETDOWN; + + /* We assume that all rings have the same XDP program */ + if (!READ_ONCE(adapter->rx_ring->xdp_bpf_prog)) + return -ENXIO; + + qid = smp_processor_id() % adapter->xdp_num_queues; + qid += adapter->xdp_first_ring; + xdp_ring = &adapter->tx_ring[qid]; + + /* Other CPU ids might try to send through this queue */ + spin_lock(&xdp_ring->xdp_tx_lock); + + for (i = 0; i < n; i++) { + err = ena_xdp_xmit_frame(xdp_ring, dev, frames[i], 0); + /* The descriptor is freed by ena_xdp_xmit_frame in case + * of an error. + */ + if (err) + drops++; + } + + /* Ring doorbell to make device aware of the packets */ + if (flags & XDP_XMIT_FLUSH) { + ena_com_write_sq_doorbell(xdp_ring->ena_com_io_sq); + ena_increase_stat(&xdp_ring->tx_stats.doorbells, 1, + &xdp_ring->syncp); + } + + spin_unlock(&xdp_ring->xdp_tx_lock); + + /* Return number of packets sent */ + return n - drops; } static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp) { struct bpf_prog *xdp_prog; + struct ena_ring *xdp_ring; u32 verdict = XDP_PASS; struct xdp_frame *xdpf; u64 *xdp_stat; + int qid; rcu_read_lock(); xdp_prog = READ_ONCE(rx_ring->xdp_bpf_prog); @@ -352,8 +401,16 @@ static int ena_xdp_execute(struct ena_ring *rx_ring, struct xdp_buff *xdp) switch (verdict) { case XDP_TX: xdpf = xdp_convert_buff_to_frame(xdp); - ena_xdp_xmit_frame(rx_ring->netdev, xdpf, - rx_ring->qid + rx_ring->adapter->num_io_queues); + /* Find xmit queue */ + qid = rx_ring->qid + rx_ring->adapter->num_io_queues; + xdp_ring = &rx_ring->adapter->tx_ring[qid]; + + /* The XDP queues are shared between XDP_TX and XDP_REDIRECT */ + spin_lock(&xdp_ring->xdp_tx_lock); + + ena_xdp_xmit_frame(xdp_ring, rx_ring->netdev, xdpf, XDP_XMIT_FLUSH); + + spin_unlock(&xdp_ring->xdp_tx_lock); xdp_stat = &rx_ring->rx_stats.xdp_tx; break; case XDP_REDIRECT: @@ -644,6 +701,7 @@ static void ena_init_io_rings(struct ena_adapter *adapter, txr->smoothed_interval = ena_com_get_nonadaptive_moderation_interval_tx(ena_dev); txr->disable_meta_caching = adapter->disable_meta_caching; + spin_lock_init(&txr->xdp_tx_lock); /* Don't init RX queues for xdp queues */ if (!ENA_IS_XDP_INDEX(adapter, i)) { @@ -3236,6 +3294,7 @@ static const struct net_device_ops ena_netdev_ops = { .ndo_set_mac_address = NULL, .ndo_validate_addr = eth_validate_addr, .ndo_bpf = ena_xdp, + .ndo_xdp_xmit = ena_xdp_xmit, }; static int ena_device_validate_params(struct ena_adapter *adapter, diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h index fed79c50a870..74af15d62ee1 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.h +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h @@ -258,6
+258,7 @@ struct ena_ring { struct ena_com_io_sq *ena_com_io_sq; struct bpf_prog *xdp_bpf_prog; struct xdp_rxq_info xdp_rxq; + spinlock_t xdp_tx_lock; /* synchronize XDP TX/Redirect traffic */ u16 next_to_use; u16 next_to_clean;
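For context on how the new handler gets exercised: the kernel's XDP redirect machinery batches redirected frames and hands them to the egress driver through ndo_xdp_xmit() (ena_xdp_xmit() above), setting XDP_XMIT_FLUSH when the batch should be pushed to the device. Below is a minimal sketch of an XDP program that would drive this path; it is not part of the patch series, and TARGET_IFINDEX is a placeholder that must be the ifindex of an XDP_REDIRECT-capable egress device such as an ENA interface.

// SPDX-License-Identifier: GPL-2.0
/* Minimal XDP redirect sketch (illustrative only, not from this series).
 * Redirects every received packet to TARGET_IFINDEX; the egress driver's
 * ndo_xdp_xmit() is then invoked to transmit the redirected frames.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define TARGET_IFINDEX 2	/* placeholder: ifindex of the egress (e.g. ENA) device */

SEC("xdp")
int xdp_redirect_all(struct xdp_md *ctx)
{
	/* bpf_redirect() queues the frame for the target device; the kernel
	 * later flushes the batch via the driver's ndo_xdp_xmit().
	 */
	return bpf_redirect(TARGET_IFINDEX, 0);
}

char _license[] SEC("license") = "GPL";

Loading such a program on any XDP-capable RX interface (for example with ip link set dev <rx-dev> xdp obj prog.o sec xdp) and pointing it at an ENA device would exercise ena_xdp_xmit(), including the new xdp_tx_lock serialization against the XDP_TX path.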