From patchwork Wed Apr 15 12:30:05 2015
X-Patchwork-Submitter: Ding Tianhong
X-Patchwork-Id: 6220061
From: Ding Tianhong
Subject: [PATCH net-next 3/6] net: hip04: Solve the problem of the skb memory allocation failure
Date: Wed, 15 Apr 2015 20:30:05 +0800
Message-ID: <1429101008-9464-4-git-send-email-dingtianhong@huawei.com>
In-Reply-To: <1429101008-9464-1-git-send-email-dingtianhong@huawei.com>
References: <1429101008-9464-1-git-send-email-dingtianhong@huawei.com>
Cc: devicetree@vger.kernel.org, linux@arm.linux.org.uk,
    sergei.shtylyov@cogentembedded.com, eric.dumazet@gmail.com,
    netdev@vger.kernel.org, joe@perches.com, zhangfei.gao@linaro.org,
    linux-arm-kernel@lists.infradead.org

The driver allocates skb buffers for the hardware queue without
considering the case of memory allocation failure: when memory is low,
the skb may be NULL and the system will panic. Break out of the loop
when the skb is NULL and retry the allocation later to fix this
problem.

Signed-off-by: Ding Tianhong
Cc: "David S. Miller"
Miller" Cc: Eric Dumazet Cc: Arnd Bergmann Cc: Zhangfei Gao Cc: Dan Carpenter Cc: Joe Perches --- drivers/net/ethernet/hisilicon/hip04_eth.c | 68 ++++++++++++++++++++++++------ 1 file changed, 54 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c index 6473462..7533858 100644 --- a/drivers/net/ethernet/hisilicon/hip04_eth.c +++ b/drivers/net/ethernet/hisilicon/hip04_eth.c @@ -131,6 +131,8 @@ #define HIP04_MAX_TX_COALESCE_FRAMES (TX_DESC_NUM - 1) #define HIP04_MIN_TX_COALESCE_FRAMES 1 +#define HIP04_RX_BUFFER_WRITE 16 + struct tx_desc { __be32 send_addr; __be32 send_size; @@ -180,6 +182,7 @@ struct hip04_priv { /* written only by tx cleanup */ unsigned int tx_tail ____cacheline_aligned_in_smp; + unsigned int rx_tail ____cacheline_aligned_in_smp; }; static inline unsigned int tx_count(unsigned int head, unsigned int tail) @@ -187,6 +190,11 @@ static inline unsigned int tx_count(unsigned int head, unsigned int tail) return (head - tail) % (TX_DESC_NUM - 1); } +static inline unsigned int rx_count(unsigned int head, unsigned int tail) +{ + return (head - tail) % (RX_DESC_NUM - 1); +} + static void hip04_config_port(struct net_device *ndev, u32 speed, u32 duplex) { struct hip04_priv *priv = netdev_priv(ndev); @@ -363,6 +371,35 @@ static int hip04_set_mac_address(struct net_device *ndev, void *addr) return 0; } +static int hip04_alloc_rx_buffers(struct net_device *ndev, int cleaned_count) +{ + struct hip04_priv *priv = netdev_priv(ndev); + unsigned char *buf; + dma_addr_t phys; + int i = priv->rx_tail; + + while (cleaned_count) { + buf = netdev_alloc_frag(priv->rx_buf_size); + if (!buf) + break; + + phys = dma_map_single(&ndev->dev, buf, + RX_BUF_SIZE, DMA_FROM_DEVICE); + if (dma_mapping_error(&ndev->dev, phys)) + break; + + priv->rx_buf[i] = buf; + priv->rx_phys[i] = phys; + hip04_set_recv_desc(priv, phys); + i = RX_NEXT(i); + cleaned_count--; + } + + priv->rx_tail = i; + + return 0; +} + static int hip04_tx_reclaim(struct net_device *ndev, bool force) { struct hip04_priv *priv = netdev_priv(ndev); @@ -482,8 +519,7 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) struct sk_buff *skb; unsigned char *buf; bool last = false; - dma_addr_t phys; - int rx = 0; + int rx = 0, cleaned_count = 0; int tx_remaining; u16 len; u32 err; @@ -491,8 +527,10 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) while (cnt && !last) { buf = priv->rx_buf[priv->rx_head]; skb = build_skb(buf, priv->rx_buf_size); - if (unlikely(!skb)) + if (unlikely(!skb)) { net_dbg_ratelimited("build_skb failed\n"); + goto done; + } dma_unmap_single(&ndev->dev, priv->rx_phys[priv->rx_head], RX_BUF_SIZE, DMA_FROM_DEVICE); @@ -519,18 +557,15 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) rx++; } - buf = netdev_alloc_frag(priv->rx_buf_size); - if (!buf) - goto done; - phys = dma_map_single(&ndev->dev, buf, - RX_BUF_SIZE, DMA_FROM_DEVICE); - if (dma_mapping_error(&ndev->dev, phys)) - goto done; - priv->rx_buf[priv->rx_head] = buf; - priv->rx_phys[priv->rx_head] = phys; - hip04_set_recv_desc(priv, phys); - priv->rx_head = RX_NEXT(priv->rx_head); + + cleaned_count = rx_count(priv->rx_head, priv->rx_tail); + /* return some buffers to hardware , one at a time is too slow */ + if (++cleaned_count >= HIP04_RX_BUFFER_WRITE) { + hip04_alloc_rx_buffers(ndev, cleaned_count); + cleaned_count = 0; + } + if (rx >= budget) goto done; @@ -545,6 +580,10 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) } 
 	napi_complete(napi);
 done:
+	cleaned_count = rx_count(priv->rx_head, priv->rx_tail);
+	if (cleaned_count)
+		hip04_alloc_rx_buffers(ndev, cleaned_count);
+
 	/* clean up tx descriptors and start a new timer if necessary */
 	tx_remaining = hip04_tx_reclaim(ndev, false);
 	if (rx < budget && tx_remaining)
@@ -621,6 +660,7 @@ static int hip04_mac_open(struct net_device *ndev)
 	int i;
 
 	priv->rx_head = 0;
+	priv->rx_tail = 0;
 	priv->tx_head = 0;
 	priv->tx_tail = 0;
 	hip04_reset_ppe(priv);
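
For reviewers who want to see the refill scheme in isolation, the sketch
below models the same batched-refill-with-backoff pattern in plain
userspace C. It is illustrative only and makes several assumptions:
malloc() stands in for netdev_alloc_frag() plus the DMA mapping, and
RING_SIZE, REFILL_BATCH, refill() and poll_cycle() are made-up names
playing the roles of RX_DESC_NUM, HIP04_RX_BUFFER_WRITE,
hip04_alloc_rx_buffers() and hip04_rx_poll().

/*
 * Illustrative userspace model of the batched refill pattern; not
 * driver code.  malloc() stands in for netdev_alloc_frag() and the
 * DMA mapping; RING_SIZE and REFILL_BATCH stand in for RX_DESC_NUM
 * and HIP04_RX_BUFFER_WRITE.
 */
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE	64	/* stand-in for RX_DESC_NUM */
#define REFILL_BATCH	16	/* stand-in for HIP04_RX_BUFFER_WRITE */
#define RING_NEXT(i)	(((i) + 1) % RING_SIZE)

static void *ring_buf[RING_SIZE];
static unsigned int head;	/* next slot the "stack" will consume */
static unsigned int tail;	/* next slot waiting for a fresh buffer */

/* Slots that have been consumed but not yet refilled. */
static unsigned int slots_to_fill(void)
{
	return (head + RING_SIZE - tail) % RING_SIZE;
}

/*
 * Refill up to 'count' slots.  On allocation failure stop quietly:
 * the remaining slots are simply retried on the next poll cycle
 * instead of dereferencing a NULL pointer.
 */
static void refill(unsigned int count)
{
	unsigned int i = tail;

	while (count--) {
		void *buf = malloc(2048);	/* may fail under memory pressure */
		if (!buf)
			break;			/* try again later */
		ring_buf[i] = buf;
		i = RING_NEXT(i);
	}
	tail = i;
}

/* One simulated poll: consume some slots, refill in batches. */
static void poll_cycle(unsigned int consumed)
{
	while (consumed--) {
		free(ring_buf[head]);		/* "hand the buffer to the stack" */
		ring_buf[head] = NULL;
		head = RING_NEXT(head);

		/* Batch the refill; one buffer at a time is too slow. */
		if (slots_to_fill() >= REFILL_BATCH)
			refill(slots_to_fill());
	}

	/* Top up whatever is still empty before returning. */
	if (slots_to_fill())
		refill(slots_to_fill());
}

int main(void)
{
	unsigned int i;

	refill(RING_SIZE - 1);			/* initial fill, keep one gap */
	poll_cycle(40);
	printf("slots still empty after poll: %u\n", slots_to_fill());

	for (i = 0; i < RING_SIZE; i++)		/* tidy up */
		free(ring_buf[i]);
	return 0;
}

The design point the sketch tries to show is the one the patch makes:
an allocation failure only pauses the refill, which is picked up again
on a later cycle, and refills happen in batches rather than one buffer
per received frame.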