From patchwork Thu Sep 29 17:09:11 2016
X-Patchwork-Submitter: Salil Mehta
X-Patchwork-Id: 9356783
From: Salil Mehta
Cc: Daode Huang
Subject: [PATCH V2 for-next 3/8] net: hns: add fini_process for v2 napi process
Date: Thu, 29 Sep 2016 18:09:11 +0100
Message-ID: <20160929170916.631972-4-salil.mehta@huawei.com>
X-Mailer: git-send-email 2.8.3
In-Reply-To: <20160929170916.631972-1-salil.mehta@huawei.com>
References: <20160929170916.631972-1-salil.mehta@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Daode Huang

This patch adds fini_process for v2. It handles the packets received by
the hardware in the NAPI poll process. With this patch, the number of
hardware interrupts per second drops by 50%.
Signed-off-by: Daode Huang
Reviewed-by: Yisen Zhuang
Signed-off-by: Salil Mehta
---
 drivers/net/ethernet/hisilicon/hns/hns_enet.c | 45 +++++++++++++++++++++----
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index 32ff270..e6bfc51 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -823,6 +823,8 @@ static void hns_nic_rx_fini_pro(struct hns_nic_ring_data *ring_data)
 	struct hnae_ring *ring = ring_data->ring;
 	int num = 0;
 
+	ring_data->ring->q->handle->dev->ops->toggle_ring_irq(ring, 0);
+
 	/* for hardware bug fixed */
 	num = readl_relaxed(ring->io_base + RCB_REG_FBDNUM);
 
@@ -834,6 +836,20 @@ static void hns_nic_rx_fini_pro(struct hns_nic_ring_data *ring_data)
 	}
 }
 
+static void hns_nic_rx_fini_pro_v2(struct hns_nic_ring_data *ring_data)
+{
+	struct hnae_ring *ring = ring_data->ring;
+	int num = 0;
+
+	num = readl_relaxed(ring->io_base + RCB_REG_FBDNUM);
+
+	if (num == 0)
+		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
+			ring, 0);
+	else
+		napi_schedule(&ring_data->napi);
+}
+
 static inline void hns_nic_reclaim_one_desc(struct hnae_ring *ring,
 					    int *bytes, int *pkts)
 {
@@ -935,7 +951,11 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
 static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data)
 {
 	struct hnae_ring *ring = ring_data->ring;
-	int head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+	int head;
+
+	ring_data->ring->q->handle->dev->ops->toggle_ring_irq(ring, 0);
+
+	head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
 
 	if (head != ring->next_to_clean) {
 		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
@@ -945,6 +965,18 @@ static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data)
 	}
 }
 
+static void hns_nic_tx_fini_pro_v2(struct hns_nic_ring_data *ring_data)
+{
+	struct hnae_ring *ring = ring_data->ring;
+	int head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+
+	if (head == ring->next_to_clean)
+		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
+			ring, 0);
+	else
+		napi_schedule(&ring_data->napi);
+}
+
 static void hns_nic_tx_clr_all_bufs(struct hns_nic_ring_data *ring_data)
 {
 	struct hnae_ring *ring = ring_data->ring;
@@ -976,10 +1008,7 @@ static int hns_nic_common_poll(struct napi_struct *napi, int budget)
 
 	if (clean_complete >= 0 && clean_complete < budget) {
 		napi_complete(napi);
-		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
-			ring_data->ring, 0);
-		if (ring_data->fini_process)
-			ring_data->fini_process(ring_data);
+		ring_data->fini_process(ring_data);
 		return 0;
 	}
 
@@ -1755,7 +1784,8 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
 		rd->queue_index = i;
 		rd->ring = &h->qs[i]->tx_ring;
 		rd->poll_one = hns_nic_tx_poll_one;
-		rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro : NULL;
+		rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro :
+			hns_nic_tx_fini_pro_v2;
 
 		netif_napi_add(priv->netdev, &rd->napi,
 			       hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM);
@@ -1767,7 +1797,8 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
 		rd->ring = &h->qs[i - h->q_num]->rx_ring;
 		rd->poll_one = hns_nic_rx_poll_one;
 		rd->ex_process = hns_nic_rx_up_pro;
-		rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro : NULL;
+		rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro :
+			hns_nic_rx_fini_pro_v2;
 
 		netif_napi_add(priv->netdev, &rd->napi,
 			       hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM);
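
The effect of the v2 fini_process callbacks can be modelled outside the
kernel. The stand-alone sketch below is only an illustration, not the
driver's real API: enable_irq() stands in for toggle_ring_irq(ring, 0),
which is assumed here to re-arm the ring interrupt, reschedule() stands
in for napi_schedule(), and the pending[] counts stand in for the
RCB_REG_FBDNUM / RCB_REG_HEAD reads.

/*
 * Toy user-space model of the v1 vs. v2 completion paths.
 * All names are illustrative placeholders.
 */
#include <stdio.h>

static int irq_rearms;
static int reschedules;

/* stands in for toggle_ring_irq(ring, 0) */
static void enable_irq(void)  { irq_rearms++; }
/* stands in for napi_schedule(&ring_data->napi) */
static void reschedule(void)  { reschedules++; }

/*
 * v1 style: always re-arm the interrupt, then poll again if work
 * remains (the real v1 path also masks the interrupt again before
 * rescheduling; that step is omitted here).
 */
static void fini_v1(int pending)
{
	enable_irq();
	if (pending > 0)
		reschedule();
}

/* v2 style: re-arm the interrupt only once the ring is empty. */
static void fini_v2(int pending)
{
	if (pending == 0)
		enable_irq();
	else
		reschedule();
}

int main(void)
{
	/* descriptors still queued after each of five poll rounds */
	int pending[] = { 4, 3, 2, 1, 0 };
	int i;

	irq_rearms = reschedules = 0;
	for (i = 0; i < 5; i++)
		fini_v1(pending[i]);
	printf("v1: %d IRQ re-arms, %d reschedules\n",
	       irq_rearms, reschedules);

	irq_rearms = reschedules = 0;
	for (i = 0; i < 5; i++)
		fini_v2(pending[i]);
	printf("v2: %d IRQ re-arms, %d reschedules\n",
	       irq_rearms, reschedules);

	return 0;
}

On this toy trace the v1 path re-arms the interrupt on every poll round
(5 re-arms), while the v2 path keeps polling from NAPI context and
re-arms only once the ring drains (1 re-arm), which is the mechanism
behind the reported reduction in hardware interrupts.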