From patchwork Sat Sep 10 04:09:25 2016
X-Patchwork-Submitter: Salil Mehta
X-Patchwork-Id: 9324723
From: Salil Mehta
Subject: [PATCH for-next 3/8] net: hns: add fini_process for v2 napi process
Date: Sat, 10 Sep 2016 05:09:25 +0100
Message-ID: <20160910040930.25988-4-salil.mehta@huawei.com>
In-Reply-To: <20160910040930.25988-1-salil.mehta@huawei.com>
References: <20160910040930.25988-1-salil.mehta@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Daode Huang

This patch adds a fini_process handler for v2 hardware; it handles the
packets received by the hardware within the napi poll process. With this
patch, the hardware interrupt rate drops by 50%.
Signed-off-by: Daode Huang
Reviewed-by: Yisen Zhuang
Signed-off-by: Salil Mehta
---
 drivers/net/ethernet/hisilicon/hns/hns_enet.c | 45 ++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index 32ff270..e6bfc51 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -823,6 +823,8 @@ static void hns_nic_rx_fini_pro(struct hns_nic_ring_data *ring_data)
 	struct hnae_ring *ring = ring_data->ring;
 	int num = 0;
 
+	ring_data->ring->q->handle->dev->ops->toggle_ring_irq(ring, 0);
+
 	/* for hardware bug fixed */
 	num = readl_relaxed(ring->io_base + RCB_REG_FBDNUM);
 
@@ -834,6 +836,20 @@ static void hns_nic_rx_fini_pro(struct hns_nic_ring_data *ring_data)
 	}
 }
 
+static void hns_nic_rx_fini_pro_v2(struct hns_nic_ring_data *ring_data)
+{
+	struct hnae_ring *ring = ring_data->ring;
+	int num = 0;
+
+	num = readl_relaxed(ring->io_base + RCB_REG_FBDNUM);
+
+	if (num == 0)
+		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
+			ring, 0);
+	else
+		napi_schedule(&ring_data->napi);
+}
+
 static inline void hns_nic_reclaim_one_desc(struct hnae_ring *ring,
 					    int *bytes, int *pkts)
 {
@@ -935,7 +951,11 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
 static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data)
 {
 	struct hnae_ring *ring = ring_data->ring;
-	int head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+	int head;
+
+	ring_data->ring->q->handle->dev->ops->toggle_ring_irq(ring, 0);
+
+	head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
 
 	if (head != ring->next_to_clean) {
 		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
@@ -945,6 +965,18 @@ static void hns_nic_tx_fini_pro(struct hns_nic_ring_data *ring_data)
 	}
 }
 
+static void hns_nic_tx_fini_pro_v2(struct hns_nic_ring_data *ring_data)
+{
+	struct hnae_ring *ring = ring_data->ring;
+	int head = readl_relaxed(ring->io_base + RCB_REG_HEAD);
+
+	if (head == ring->next_to_clean)
+		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
+			ring, 0);
+	else
+		napi_schedule(&ring_data->napi);
+}
+
 static void hns_nic_tx_clr_all_bufs(struct hns_nic_ring_data *ring_data)
 {
 	struct hnae_ring *ring = ring_data->ring;
@@ -976,10 +1008,7 @@ static int hns_nic_common_poll(struct napi_struct *napi, int budget)
 
 	if (clean_complete >= 0 && clean_complete < budget) {
 		napi_complete(napi);
-		ring_data->ring->q->handle->dev->ops->toggle_ring_irq(
-			ring_data->ring, 0);
-		if (ring_data->fini_process)
-			ring_data->fini_process(ring_data);
+		ring_data->fini_process(ring_data);
 		return 0;
 	}
 
@@ -1755,7 +1784,8 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
 		rd->queue_index = i;
 		rd->ring = &h->qs[i]->tx_ring;
 		rd->poll_one = hns_nic_tx_poll_one;
-		rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro : NULL;
+		rd->fini_process = is_ver1 ? hns_nic_tx_fini_pro :
+			hns_nic_tx_fini_pro_v2;
 
 		netif_napi_add(priv->netdev, &rd->napi,
 			       hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM);
@@ -1767,7 +1797,8 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
 		rd->ring = &h->qs[i - h->q_num]->rx_ring;
 		rd->poll_one = hns_nic_rx_poll_one;
 		rd->ex_process = hns_nic_rx_up_pro;
-		rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro : NULL;
+		rd->fini_process = is_ver1 ? hns_nic_rx_fini_pro :
+			hns_nic_rx_fini_pro_v2;
 
 		netif_napi_add(priv->netdev, &rd->napi,
 			       hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM);
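
For context, below is a minimal standalone C sketch (userspace, not driver
code) of why the v2 fini path roughly halves the interrupt rate. The idea,
taken from the hunks above: in v1 the fini step unmasks the ring irq first,
so leftover descriptors re-enter napi through a fresh hardware interrupt;
in v2, leftover work calls napi_schedule() directly and the irq is only
unmasked once the ring is drained. All names here (struct ring, fini_v1,
fini_v2, run) are illustrative stand-ins, not the hns_enet.c symbols.

#include <stdbool.h>
#include <stdio.h>

struct ring {
	int pending;	/* descriptors still unprocessed */
	int irqs;	/* interrupts the "hardware" had to raise */
};

/* v1 flow: unmask first, then take another irq if work remains. */
static bool fini_v1(struct ring *r)
{
	/* toggle_ring_irq(ring, 0): irq unmasked */
	if (r->pending) {
		r->irqs++;	/* ring not empty: irq fires again ... */
		/* ... and the isr masks it and schedules napi */
		return true;	/* poll again */
	}
	return false;		/* idle until the next packet */
}

/* v2 flow: reschedule napi directly; unmask only when drained. */
static bool fini_v2(struct ring *r)
{
	if (r->pending)
		return true;	/* napi_schedule(): no irq round-trip */
	/* toggle_ring_irq(ring, 0): ring is empty, unmask the irq */
	return false;
}

static int run(bool (*fini)(struct ring *), int bursts, int polls_per_burst)
{
	struct ring r = { .pending = 0, .irqs = 0 };
	int i;

	for (i = 0; i < bursts; i++) {
		r.pending = polls_per_burst;	/* burst of arriving packets */
		r.irqs++;			/* first irq kicks off napi */
		do
			r.pending--;		/* one poll pass of work */
		while (fini(&r));
	}
	return r.irqs;
}

int main(void)
{
	/* Each burst needs two poll passes, so v1 pays one extra irq per burst. */
	printf("v1 irqs: %d\n", run(fini_v1, 1000, 2));	/* prints 2000 */
	printf("v2 irqs: %d\n", run(fini_v2, 1000, 2));	/* prints 1000 */
	return 0;
}

Under the assumption that a typical burst takes two poll passes to drain,
the model reproduces the 50% drop claimed in the commit message: v1 takes
one interrupt per pass, v2 one interrupt per burst.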