From patchwork Thu Dec 17 06:26:05 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Iyappan Subramanian
X-Patchwork-Id: 7869741
From: Iyappan Subramanian
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: patches@apm.com, linux-arm-kernel@lists.infradead.org, Iyappan Subramanian
Subject: [PATCH v2] drivers: net: xgene: fix Tx flow control
Date: Wed, 16 Dec 2015 22:26:05 -0800
Message-Id: <1450333565-23670-1-git-send-email-isubramanian@apm.com>
X-Mailer: git-send-email 1.9.1

Currently, Tx flow control is based on reading the hardware state, which
is not accurate because it may not account for descriptors that have not
yet reached memory. To control the Tx flow accurately, change it to be
software based.

Signed-off-by: Iyappan Subramanian
---
v2: Address v1 review comments
 - Removed use of atomic_t
 - Added tx_level and txc_level counters to pdata

v1: Initial version
---
 drivers/net/ethernet/apm/xgene/xgene_enet_main.c | 38 ++++++++++++++----------
 drivers/net/ethernet/apm/xgene/xgene_enet_main.h |  4 +--
 2 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index 9147a01..d0ae1a6 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -289,6 +289,7 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 				    struct sk_buff *skb)
 {
 	struct device *dev = ndev_to_dev(tx_ring->ndev);
+	struct xgene_enet_pdata *pdata = netdev_priv(tx_ring->ndev);
 	struct xgene_enet_raw_desc *raw_desc;
 	__le64 *exp_desc = NULL, *exp_bufs = NULL;
 	dma_addr_t dma_addr, pbuf_addr, *frag_dma_addr;
@@ -419,6 +420,7 @@ out:
 	raw_desc->m0 = cpu_to_le64(SET_VAL(LL, ll) | SET_VAL(NV, nv) |
 				   SET_VAL(USERINFO, tx_ring->tail));
 	tx_ring->cp_ring->cp_skb[tx_ring->tail] = skb;
+	pdata->tx_level += count;
 	tx_ring->tail = tail;
 
 	return count;
@@ -429,14 +431,13 @@ static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
 {
 	struct xgene_enet_pdata *pdata = netdev_priv(ndev);
 	struct xgene_enet_desc_ring *tx_ring = pdata->tx_ring;
-	struct xgene_enet_desc_ring *cp_ring = tx_ring->cp_ring;
-	u32 tx_level, cq_level;
+	u32 tx_level = pdata->tx_level;
 	int count;
 
-	tx_level = pdata->ring_ops->len(tx_ring);
-	cq_level = pdata->ring_ops->len(cp_ring);
-	if (unlikely(tx_level > pdata->tx_qcnt_hi ||
-		     cq_level > pdata->cp_qcnt_hi)) {
+	if (tx_level < pdata->txc_level)
+		tx_level += ((typeof(pdata->tx_level))~0U);
+
+	if ((tx_level - pdata->txc_level) > pdata->tx_qcnt_hi) {
 		netif_stop_queue(ndev);
 		return NETDEV_TX_BUSY;
 	}
@@ -539,10 +540,13 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 	struct xgene_enet_raw_desc *raw_desc, *exp_desc;
 	u16 head = ring->head;
 	u16 slots = ring->slots - 1;
-	int ret, count = 0, processed = 0;
+	int ret, desc_count, count = 0, processed = 0;
+	bool is_completion;
 
 	do {
 		raw_desc = &ring->raw_desc[head];
+		desc_count = 0;
+		is_completion = false;
 		exp_desc = NULL;
 		if (unlikely(xgene_enet_is_desc_slot_empty(raw_desc)))
 			break;
@@ -559,18 +563,24 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 			}
 			dma_rmb();
 			count++;
+			desc_count++;
 		}
 
-		if (is_rx_desc(raw_desc))
+		if (is_rx_desc(raw_desc)) {
 			ret = xgene_enet_rx_frame(ring, raw_desc);
-		else
+		} else {
 			ret = xgene_enet_tx_completion(ring, raw_desc);
+			is_completion = true;
+		}
 		xgene_enet_mark_desc_slot_empty(raw_desc);
 		if (exp_desc)
 			xgene_enet_mark_desc_slot_empty(exp_desc);
 		head = (head + 1) & slots;
 		count++;
+		desc_count++;
 		processed++;
+		if (is_completion)
+			pdata->txc_level += desc_count;
 
 		if (ret)
 			break;
@@ -580,10 +590,8 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 		pdata->ring_ops->wr_cmd(ring, -count);
 		ring->head = head;
 
-		if (netif_queue_stopped(ring->ndev)) {
-			if (pdata->ring_ops->len(ring) < pdata->cp_qcnt_low)
-				netif_wake_queue(ring->ndev);
-		}
+		if (netif_queue_stopped(ring->ndev))
+			netif_start_queue(ring->ndev);
 	}
 
 	return processed;
@@ -1033,9 +1041,7 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
 	pdata->tx_ring->cp_ring = cp_ring;
 	pdata->tx_ring->dst_ring_num = xgene_enet_dst_ring_num(cp_ring);
 
-	pdata->tx_qcnt_hi = pdata->tx_ring->slots / 2;
-	pdata->cp_qcnt_hi = pdata->rx_ring->slots / 2;
-	pdata->cp_qcnt_low = pdata->cp_qcnt_hi / 2;
+	pdata->tx_qcnt_hi = pdata->tx_ring->slots - 128;
 
 	return 0;
 
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
index a6e56b8..1aa72c7 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
@@ -155,11 +155,11 @@ struct xgene_enet_pdata {
 	enum xgene_enet_id enet_id;
 	struct xgene_enet_desc_ring *tx_ring;
 	struct xgene_enet_desc_ring *rx_ring;
+	u16 tx_level;
+	u16 txc_level;
 	char *dev_name;
 	u32 rx_buff_cnt;
 	u32 tx_qcnt_hi;
-	u32 cp_qcnt_hi;
-	u32 cp_qcnt_low;
 	u32 rx_irq;
 	u32 txc_irq;
 	u8 cq_cnt;
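
For reviewers, the core of the change is a pair of free-running u16 counters:
xgene_enet_setup_tx_desc() advances tx_level for every descriptor queued,
xgene_enet_process_ring() advances txc_level for every descriptor reclaimed by
Tx completion, and xgene_enet_start_xmit() stops the queue once the difference
exceeds tx_qcnt_hi, widening the producer counter when it has wrapped past the
consumer. The standalone sketch below is not part of the patch: struct
flow_ctl, tx_ring_full() and the sample values are illustrative names chosen
here, while tx_level, txc_level and tx_qcnt_hi mirror the pdata fields above.

/*
 * Standalone sketch (not kernel code) of the software-based Tx flow
 * control: a producer counter advanced in the xmit path and a
 * completion counter advanced when Tx completions are processed.
 * Both are u16 and allowed to wrap; their difference is compared
 * against a high watermark to decide when to stop the queue.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

struct flow_ctl {
	uint16_t tx_level;   /* descriptors queued by the xmit path */
	uint16_t txc_level;  /* descriptors reclaimed by Tx completion */
	uint32_t tx_qcnt_hi; /* stop threshold, e.g. slots - 128 */
};

/* Mirrors the check in xgene_enet_start_xmit(): true means "stop queue". */
static bool tx_ring_full(const struct flow_ctl *fc)
{
	uint32_t tx_level = fc->tx_level;

	/* Producer counter wrapped past the completion counter. */
	if (tx_level < fc->txc_level)
		tx_level += (uint16_t)~0U;

	return (tx_level - fc->txc_level) > fc->tx_qcnt_hi;
}

int main(void)
{
	struct flow_ctl fc = { .tx_qcnt_hi = 1024 - 128 };

	fc.tx_level = 65500;		/* producer close to wrapping */
	fc.txc_level = 65000;
	printf("in flight 500 -> full? %d\n", tx_ring_full(&fc));

	fc.tx_level = 100;		/* producer wrapped around */
	fc.txc_level = 65000;
	printf("wrapped, in flight ~635 -> full? %d\n", tx_ring_full(&fc));

	fc.tx_level = 2000;		/* 1500 in flight > 896 threshold */
	fc.txc_level = 500;
	printf("in flight 1500 -> full? %d\n", tx_ring_full(&fc));

	return 0;
}

Note that adding (u16)~0U (65535) rather than 65536 under-counts the wrapped
case by one descriptor, so the queue stops one slot later than an exact count
would; with tx_qcnt_hi set to slots - 128 there is ample headroom for that.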