From patchwork Fri Aug 25 22:23:30 2017
X-Patchwork-Submitter: Iyappan Subramanian
X-Patchwork-Id: 9923029
From: Iyappan Subramanian <isubramanian@apm.com>
To: davem@davemloft.net, netdev@vger.kernel.org
Cc: qnguyen@apm.com, dnelson@redhat.com, patches@apm.com,
 linux-arm-kernel@lists.infradead.org, Iyappan Subramanian
Subject: [PATCH 2/2] drivers: net: xgene: Clean up all outstanding tx descriptors
Date: Fri, 25 Aug 2017 15:23:30 -0700
Message-Id: <1503699810-12803-3-git-send-email-isubramanian@apm.com>
In-Reply-To: <1503699810-12803-1-git-send-email-isubramanian@apm.com>
References: <1503699810-12803-1-git-send-email-isubramanian@apm.com>

When xgene_enet is rmmod'd while there are still outstanding tx
descriptors that have been set up but have not completed, it is possible
on the next modprobe of the driver to receive the oldest of those tx
descriptors. This results in a kernel NULL pointer dereference.

This patch cleans up (by tearing down) all outstanding tx descriptors
when the xgene_enet driver is being rmmod'd. Given that, on the next
modprobe it should be safe to ignore any received tx descriptor that
maps to a NULL skb pointer.

Additionally, this patch removes a redundant call to dev_kfree_skb_any()
from xgene_enet_setup_tx_desc(). The only caller of
xgene_enet_setup_tx_desc() already calls dev_kfree_skb_any() upon return
of an error, so nothing is gained by calling it twice in a row.
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Dean Nelson <dnelson@redhat.com>
Tested-by: Quan Nguyen <qnguyen@apm.com>
---
 drivers/net/ethernet/apm/xgene/xgene_enet_main.c | 120 +++++++++++++++++------
 1 file changed, 89 insertions(+), 31 deletions(-)

diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index 6e253d9..76e2903 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -237,22 +237,24 @@ static irqreturn_t xgene_enet_rx_irq(const int irq, void *data)
 	return IRQ_HANDLED;
 }
 
-static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
-				    struct xgene_enet_raw_desc *raw_desc)
+static dma_addr_t *xgene_get_frag_dma_array(struct xgene_enet_desc_ring *ring,
+					    u16 skb_index)
 {
-	struct xgene_enet_pdata *pdata = netdev_priv(cp_ring->ndev);
-	struct sk_buff *skb;
+	return &ring->frag_dma_addr[skb_index * MAX_SKB_FRAGS];
+}
+
+static void xgene_enet_teardown_tx_desc(struct xgene_enet_desc_ring *cp_ring,
+					struct xgene_enet_raw_desc *raw_desc,
+					struct xgene_enet_raw_desc *exp_desc,
+					struct sk_buff *skb,
+					u16 skb_index)
+{
+	dma_addr_t dma_addr, *frag_dma_addr;
 	struct device *dev;
 	skb_frag_t *frag;
-	dma_addr_t *frag_dma_addr;
-	u16 skb_index;
-	u8 mss_index;
-	u8 status;
 	int i;
 
-	skb_index = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
-	skb = cp_ring->cp_skb[skb_index];
-	frag_dma_addr = &cp_ring->frag_dma_addr[skb_index * MAX_SKB_FRAGS];
+	frag_dma_addr = xgene_get_frag_dma_array(cp_ring, skb_index);
 
 	dev = ndev_to_dev(cp_ring->ndev);
 	dma_unmap_single(dev, GET_VAL(DATAADDR, le64_to_cpu(raw_desc->m1)),
@@ -265,6 +267,36 @@ static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
 			       DMA_TO_DEVICE);
 	}
 
+	if (exp_desc && GET_VAL(LL_BYTES_LSB, le64_to_cpu(raw_desc->m2))) {
+		dma_addr = GET_VAL(DATAADDR, le64_to_cpu(exp_desc->m2));
+		dma_unmap_single(dev, dma_addr, sizeof(u64) * MAX_EXP_BUFFS,
+				 DMA_TO_DEVICE);
+	}
+
+	dev_kfree_skb_any(skb);
+}
+
+static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
+				    struct xgene_enet_raw_desc *raw_desc,
+				    struct xgene_enet_raw_desc *exp_desc)
+{
+	struct xgene_enet_pdata *pdata = netdev_priv(cp_ring->ndev);
+	struct sk_buff *skb;
+	u16 skb_index;
+	u8 status;
+	u8 mss_index;
+
+	skb_index = GET_VAL(USERINFO, le64_to_cpu(raw_desc->m0));
+	skb = cp_ring->cp_skb[skb_index];
+	if (unlikely(!skb)) {
+		netdev_err(cp_ring->ndev, "completion skb is NULL\n");
+		return -EIO;
+	}
+	cp_ring->cp_skb[skb_index] = NULL;
+
+	xgene_enet_teardown_tx_desc(cp_ring, raw_desc, exp_desc, skb,
+				    skb_index);
+
 	if (GET_BIT(ET, le64_to_cpu(raw_desc->m3))) {
 		mss_index = GET_VAL(MSS, le64_to_cpu(raw_desc->m3));
 		spin_lock(&pdata->mss_lock);
@@ -279,12 +311,6 @@ static int xgene_enet_tx_completion(struct xgene_enet_desc_ring *cp_ring,
 		cp_ring->tx_errors++;
 	}
 
-	if (likely(skb)) {
-		dev_kfree_skb_any(skb);
-	} else {
-		netdev_err(cp_ring->ndev, "completion skb is NULL\n");
-	}
-
 	return 0;
 }
 
@@ -412,11 +438,6 @@ static __le64 *xgene_enet_get_exp_bufs(struct xgene_enet_desc_ring *ring)
 	return exp_bufs;
 }
 
-static dma_addr_t *xgene_get_frag_dma_array(struct xgene_enet_desc_ring *ring)
-{
-	return &ring->cp_ring->frag_dma_addr[ring->tail * MAX_SKB_FRAGS];
-}
-
 static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 				    struct sk_buff *skb)
 {
@@ -473,7 +494,8 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 	for (i = nr_frags; i < 4 ; i++)
 		exp_desc[i ^ 1] = cpu_to_le64(LAST_BUFFER);
 
-	frag_dma_addr = xgene_get_frag_dma_array(tx_ring);
+	frag_dma_addr = xgene_get_frag_dma_array(tx_ring->cp_ring,
+						 tx_ring->tail);
 
 	for (i = 0, fidx = 0; split || (fidx < nr_frags); i++) {
 		if (!split) {
@@ -484,7 +506,7 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 			pbuf_addr = skb_frag_dma_map(dev, frag, 0, size,
 						     DMA_TO_DEVICE);
 			if (dma_mapping_error(dev, pbuf_addr))
-				return -EINVAL;
+				goto err;
 
 			frag_dma_addr[fidx] = pbuf_addr;
 			fidx++;
@@ -539,10 +561,9 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 		dma_addr = dma_map_single(dev, exp_bufs,
 					  sizeof(u64) * MAX_EXP_BUFFS,
 					  DMA_TO_DEVICE);
-		if (dma_mapping_error(dev, dma_addr)) {
-			dev_kfree_skb_any(skb);
-			return -EINVAL;
-		}
+		if (dma_mapping_error(dev, dma_addr))
+			goto err;
+
 		i = ell_bytes >> LL_BYTES_LSB_LEN;
 		exp_desc[2] = cpu_to_le64(SET_VAL(DATAADDR, dma_addr) |
 					  SET_VAL(LL_BYTES_MSB, i) |
@@ -558,6 +579,19 @@ static int xgene_enet_setup_tx_desc(struct xgene_enet_desc_ring *tx_ring,
 	tx_ring->tail = tail;
 
 	return count;
+
+err:
+	dma_unmap_single(dev, GET_VAL(DATAADDR, le64_to_cpu(raw_desc->m1)),
+			 skb_headlen(skb),
+			 DMA_TO_DEVICE);
+
+	for (i = 0; i < fidx; i++) {
+		frag = &skb_shinfo(skb)->frags[i];
+		dma_unmap_page(dev, frag_dma_addr[i], skb_frag_size(frag),
+			       DMA_TO_DEVICE);
+	}
+
+	return -EINVAL;
 }
 
 static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
@@ -828,7 +862,8 @@ static int xgene_enet_process_ring(struct xgene_enet_desc_ring *ring,
 		if (is_rx_desc(raw_desc)) {
 			ret = xgene_enet_rx_frame(ring, raw_desc, exp_desc);
 		} else {
-			ret = xgene_enet_tx_completion(ring, raw_desc);
+			ret = xgene_enet_tx_completion(ring, raw_desc,
+						       exp_desc);
 			is_completion = true;
 		}
 		xgene_enet_mark_desc_slot_empty(raw_desc);
@@ -1071,18 +1106,41 @@ static void xgene_enet_delete_desc_rings(struct xgene_enet_pdata *pdata)
 {
 	struct xgene_enet_desc_ring *buf_pool, *page_pool;
 	struct xgene_enet_desc_ring *ring;
-	int i;
+	struct xgene_enet_raw_desc *raw_desc, *exp_desc;
+	struct sk_buff *skb;
+	int i, j, k;
 
 	for (i = 0; i < pdata->txq_cnt; i++) {
 		ring = pdata->tx_ring[i];
 		if (ring) {
+			/*
+			 * Find any tx descriptors that were setup but never
+			 * completed, and teardown the setup.
+			 */
+			for (j = 0; j < ring->slots; j++) {
+				skb = ring->cp_ring->cp_skb[j];
+				if (likely(!skb))
+					continue;
+
+				raw_desc = &ring->raw_desc[j];
+				exp_desc = NULL;
+				if (GET_BIT(NV, le64_to_cpu(raw_desc->m0))) {
+					k = (j + 1) & (ring->slots - 1);
+					exp_desc = &ring->raw_desc[k];
+				}
+
+				xgene_enet_teardown_tx_desc(ring->cp_ring,
+							    raw_desc, exp_desc,
+							    skb, j);
+			}
+
 			xgene_enet_delete_ring(ring);
 			pdata->port_ops->clear(pdata, ring);
+
 			if (pdata->cq_cnt)
 				xgene_enet_delete_ring(ring->cp_ring);
 			pdata->tx_ring[i] = NULL;
 		}
-	}
 
 	for (i = 0; i < pdata->rxq_cnt; i++) {