From patchwork Tue Oct 29 12:17:09 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13854853
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
Date: Tue, 29 Oct 2024 13:17:09 +0100
Subject: [PATCH net-next 1/2] net: airoha: Read completion queue data in airoha_qdma_tx_napi_poll()
Message-Id: <20241029-airoha-en7581-tx-napi-work-v1-1-96ad1686b946@kernel.org>
References: <20241029-airoha-en7581-tx-napi-work-v1-0-96ad1686b946@kernel.org>
In-Reply-To: <20241029-airoha-en7581-tx-napi-work-v1-0-96ad1686b946@kernel.org>
To: Felix Fietkau, Sean Wang, Mark Lee, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger,
 AngeloGioacchino Del Regno
Cc: linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
 netdev@vger.kernel.org, Lorenzo Bianconi
X-Mailer: b4 0.14.2

In order to avoid any possible race, read the completion queue head and the
number of pending entries in the airoha_qdma_tx_napi_poll routine instead of
doing it in airoha_irq_handler. Remove the fields of struct
airoha_tx_irq_queue that are now unused.
This is a preliminary patch to add Qdisc offload for the airoha_eth driver.
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/airoha_eth.c | 31 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
index f463a505f5babed3e8e53bf62c92290fc94b3525..6cd8901ed38f0640a8a8f72174c120668b364045 100644
--- a/drivers/net/ethernet/mediatek/airoha_eth.c
+++ b/drivers/net/ethernet/mediatek/airoha_eth.c
@@ -752,11 +752,9 @@ struct airoha_tx_irq_queue {
 	struct airoha_qdma *qdma;
 
 	struct napi_struct napi;
 
-	u32 *q;
 	int size;
-	int queued;
-	u16 head;
+	u32 *q;
 };
 
 struct airoha_hw_stats {
@@ -1656,25 +1654,31 @@ static int airoha_qdma_init_rx(struct airoha_qdma *qdma)
 static int airoha_qdma_tx_napi_poll(struct napi_struct *napi, int budget)
 {
 	struct airoha_tx_irq_queue *irq_q;
+	int id, done = 0, irq_queued;
 	struct airoha_qdma *qdma;
 	struct airoha_eth *eth;
-	int id, done = 0;
+	u32 status, head;
 
 	irq_q = container_of(napi, struct airoha_tx_irq_queue, napi);
 	qdma = irq_q->qdma;
 	id = irq_q - &qdma->q_tx_irq[0];
 	eth = qdma->eth;
 
-	while (irq_q->queued > 0 && done < budget) {
-		u32 qid, last, val = irq_q->q[irq_q->head];
+	status = airoha_qdma_rr(qdma, REG_IRQ_STATUS(id));
+	head = FIELD_GET(IRQ_HEAD_IDX_MASK, status);
+	head = head % irq_q->size;
+	irq_queued = FIELD_GET(IRQ_ENTRY_LEN_MASK, status);
+
+	while (irq_queued > 0 && done < budget) {
+		u32 qid, last, val = irq_q->q[head];
 		struct airoha_queue *q;
 
 		if (val == 0xff)
 			break;
 
-		irq_q->q[irq_q->head] = 0xff; /* mark as done */
-		irq_q->head = (irq_q->head + 1) % irq_q->size;
-		irq_q->queued--;
+		irq_q->q[head] = 0xff; /* mark as done */
+		head = (head + 1) % irq_q->size;
+		irq_queued--;
 		done++;
 
 		last = FIELD_GET(IRQ_DESC_IDX_MASK, val);
@@ -2026,20 +2030,11 @@ static irqreturn_t airoha_irq_handler(int irq, void *dev_instance)
 
 	if (intr[0] & INT_TX_MASK) {
 		for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++) {
-			struct airoha_tx_irq_queue *irq_q = &qdma->q_tx_irq[i];
-			u32 status, head;
-
 			if (!(intr[0] & TX_DONE_INT_MASK(i)))
 				continue;
 
 			airoha_qdma_irq_disable(qdma, QDMA_INT_REG_IDX0,
 						TX_DONE_INT_MASK(i));
-
-			status = airoha_qdma_rr(qdma, REG_IRQ_STATUS(i));
-			head = FIELD_GET(IRQ_HEAD_IDX_MASK, status);
-			irq_q->head = head % irq_q->size;
-			irq_q->queued = FIELD_GET(IRQ_ENTRY_LEN_MASK, status);
-
 			napi_schedule(&qdma->q_tx_irq[i].napi);
 		}
 	}
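For readers unfamiliar with the completion-ring bookkeeping the patch above
moves into the NAPI poll routine, the sketch below models it in standalone
user-space C: the ring head index and the number of pending entries are
unpacked from a single packed status word, and the ring is then walked while
each consumed word is overwritten with the 0xff "done" sentinel. This is an
approximation, not driver code; the bit layout chosen for IRQ_HEAD_IDX_MASK /
IRQ_ENTRY_LEN_MASK, the ring size, and the field_get() helper are assumptions
made purely for illustration.

/*
 * Standalone model of the completion-ring handling in
 * airoha_qdma_tx_napi_poll(): unpack head index and pending-entry count
 * from one status word, then walk the ring marking consumed words.
 * The mask layout and ring size below are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>

#define IRQ_ENTRY_LEN_MASK	0xffff0000u	/* hypothetical layout */
#define IRQ_HEAD_IDX_MASK	0x0000ffffu	/* hypothetical layout */
#define RING_SIZE		256

/* Minimal stand-in for the kernel's FIELD_GET(): mask, then shift down. */
static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) / (mask & -mask);
}

int main(void)
{
	uint32_t ring[RING_SIZE];
	/* Pretend the hardware reported head = 250 with 10 pending entries. */
	uint32_t status = (10u << 16) | 250u;
	uint32_t head = field_get(IRQ_HEAD_IDX_MASK, status) % RING_SIZE;
	int queued = field_get(IRQ_ENTRY_LEN_MASK, status);

	for (uint32_t i = 0; i < RING_SIZE; i++)
		ring[i] = 0x100 + i;	/* dummy completion words */

	while (queued > 0) {
		uint32_t val = ring[head];

		if (val == 0xff)	/* already consumed */
			break;

		ring[head] = 0xff;		/* mark as done */
		head = (head + 1) % RING_SIZE;	/* wrap around the ring */
		queued--;
		printf("consumed completion word 0x%x\n", (unsigned)val);
	}
	return 0;
}

Reading the status register from the poll routine rather than the hard IRQ
handler also removes the need to cache head/queued in the per-queue struct,
which is why the corresponding fields can be dropped.
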
From patchwork Tue Oct 29 12:17:10 2024
X-Patchwork-Submitter: Lorenzo Bianconi
X-Patchwork-Id: 13854854
X-Patchwork-Delegate: kuba@kernel.org
From: Lorenzo Bianconi
Date: Tue, 29 Oct 2024 13:17:10 +0100
Subject: [PATCH net-next 2/2] net: airoha: Simplify Tx napi logic
Message-Id: <20241029-airoha-en7581-tx-napi-work-v1-2-96ad1686b946@kernel.org>
References: <20241029-airoha-en7581-tx-napi-work-v1-0-96ad1686b946@kernel.org>
In-Reply-To: <20241029-airoha-en7581-tx-napi-work-v1-0-96ad1686b946@kernel.org>
To: Felix Fietkau, Sean Wang, Mark Lee, Andrew Lunn, "David S. Miller",
 Eric Dumazet, Jakub Kicinski, Paolo Abeni, Matthias Brugger,
 AngeloGioacchino Del Regno
Cc: linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org,
 netdev@vger.kernel.org, Lorenzo Bianconi
X-Mailer: b4 0.14.2

Simplify the Tx NAPI logic by relying just on the packet index reported by
the completion queue, which indicates the completed packet that can be
removed from the Tx DMA ring.
This is a preliminary patch to add Qdisc offload for the airoha_eth driver.
Signed-off-by: Lorenzo Bianconi
---
 drivers/net/ethernet/mediatek/airoha_eth.c | 73 +++++++++++++++++-------------
 1 file changed, 41 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
index 6cd8901ed38f0640a8a8f72174c120668b364045..6c683a12d5aa52dd9d966df123509075a989c0b3 100644
--- a/drivers/net/ethernet/mediatek/airoha_eth.c
+++ b/drivers/net/ethernet/mediatek/airoha_eth.c
@@ -1670,8 +1670,12 @@ static int airoha_qdma_tx_napi_poll(struct napi_struct *napi, int budget)
 	irq_queued = FIELD_GET(IRQ_ENTRY_LEN_MASK, status);
 
 	while (irq_queued > 0 && done < budget) {
-		u32 qid, last, val = irq_q->q[head];
+		u32 qid, val = irq_q->q[head];
+		struct airoha_qdma_desc *desc;
+		struct airoha_queue_entry *e;
 		struct airoha_queue *q;
+		u32 index, desc_ctrl;
+		struct sk_buff *skb;
 
 		if (val == 0xff)
 			break;
@@ -1681,9 +1685,7 @@ static int airoha_qdma_tx_napi_poll(struct napi_struct *napi, int budget)
 		irq_queued--;
 		done++;
 
-		last = FIELD_GET(IRQ_DESC_IDX_MASK, val);
 		qid = FIELD_GET(IRQ_RING_IDX_MASK, val);
-
 		if (qid >= ARRAY_SIZE(qdma->q_tx))
 			continue;
 
@@ -1691,46 +1693,53 @@ static int airoha_qdma_tx_napi_poll(struct napi_struct *napi, int budget)
 		if (!q->ndesc)
 			continue;
 
+		index = FIELD_GET(IRQ_DESC_IDX_MASK, val);
+		if (index >= q->ndesc)
+			continue;
+
 		spin_lock_bh(&q->lock);
 
-		while (q->queued > 0) {
-			struct airoha_qdma_desc *desc = &q->desc[q->tail];
-			struct airoha_queue_entry *e = &q->entry[q->tail];
-			u32 desc_ctrl = le32_to_cpu(desc->ctrl);
-			struct sk_buff *skb = e->skb;
-			u16 index = q->tail;
+		if (!q->queued)
+			goto unlock;
 
-			if (!(desc_ctrl & QDMA_DESC_DONE_MASK) &&
-			    !(desc_ctrl & QDMA_DESC_DROP_MASK))
-				break;
+		desc = &q->desc[index];
+		desc_ctrl = le32_to_cpu(desc->ctrl);
 
-			q->tail = (q->tail + 1) % q->ndesc;
-			q->queued--;
+		if (!(desc_ctrl & QDMA_DESC_DONE_MASK) &&
+		    !(desc_ctrl & QDMA_DESC_DROP_MASK))
+			goto unlock;
 
-			dma_unmap_single(eth->dev, e->dma_addr, e->dma_len,
-					 DMA_TO_DEVICE);
+		e = &q->entry[index];
+		skb = e->skb;
 
-			WRITE_ONCE(desc->msg0, 0);
-			WRITE_ONCE(desc->msg1, 0);
+		dma_unmap_single(eth->dev, e->dma_addr, e->dma_len,
+				 DMA_TO_DEVICE);
+		memset(e, 0, sizeof(*e));
+		WRITE_ONCE(desc->msg0, 0);
+		WRITE_ONCE(desc->msg1, 0);
+		q->queued--;
 
-			if (skb) {
-				u16 queue = skb_get_queue_mapping(skb);
-				struct netdev_queue *txq;
+		/* completion ring can report out-of-order indexes if hw QoS
+		 * is enabled and packets with different priority are queued
+		 * to same DMA ring. Take into account possible out-of-order
+		 * reports incrementing DMA ring tail pointer
+		 */
+		while (q->tail != q->head && !q->entry[q->tail].dma_addr)
+			q->tail = (q->tail + 1) % q->ndesc;
 
-				txq = netdev_get_tx_queue(skb->dev, queue);
-				netdev_tx_completed_queue(txq, 1, skb->len);
-				if (netif_tx_queue_stopped(txq) &&
-				    q->ndesc - q->queued >= q->free_thr)
-					netif_tx_wake_queue(txq);
+		if (skb) {
+			u16 queue = skb_get_queue_mapping(skb);
+			struct netdev_queue *txq;
 
-				dev_kfree_skb_any(skb);
-				e->skb = NULL;
-			}
+			txq = netdev_get_tx_queue(skb->dev, queue);
+			netdev_tx_completed_queue(txq, 1, skb->len);
+			if (netif_tx_queue_stopped(txq) &&
+			    q->ndesc - q->queued >= q->free_thr)
+				netif_tx_wake_queue(txq);
 
-			if (index == last)
-				break;
+			dev_kfree_skb_any(skb);
 		}
-
+unlock:
 		spin_unlock_bh(&q->lock);
 	}
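The subtle part of the change above is the out-of-order completion handling:
with hardware QoS, descriptors of the same DMA ring may complete in a
different order than they were queued, so the driver clears the entry at the
reported index and only then advances the tail over entries that were already
freed. The toy program below models just that tail-advance rule; the struct
and function names are illustrative assumptions, not the driver's.

/*
 * Toy model of the tail-pointer handling above: completions may arrive
 * for arbitrary descriptor indexes, so the tail only advances over
 * entries that have already been cleared (dma_addr == 0).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDESC 8

struct toy_entry {
	uintptr_t dma_addr;	/* 0 means "completed and unmapped" */
};

struct toy_ring {
	struct toy_entry entry[NDESC];
	int head;	/* next descriptor the CPU will fill */
	int tail;	/* oldest descriptor still owned by the hardware */
};

/* Complete the descriptor at 'index', then reclaim in-order tail space. */
static void toy_complete(struct toy_ring *q, int index)
{
	memset(&q->entry[index], 0, sizeof(q->entry[index]));

	/* Skip entries that completed earlier but out of order. */
	while (q->tail != q->head && !q->entry[q->tail].dma_addr)
		q->tail = (q->tail + 1) % NDESC;
}

int main(void)
{
	struct toy_ring q = { .head = 4, .tail = 0 };

	for (int i = 0; i < 4; i++)
		q.entry[i].dma_addr = 0x1000 + i;	/* four packets in flight */

	toy_complete(&q, 2);	/* out of order: index 0 still pending */
	printf("tail = %d\n", q.tail);	/* 0 */

	toy_complete(&q, 0);	/* index 0 done, index 1 still blocks */
	printf("tail = %d\n", q.tail);	/* 1 */

	toy_complete(&q, 1);	/* tail skips the already-freed index 2 */
	printf("tail = %d\n", q.tail);	/* 3 */

	return 0;
}

Note that in the patch itself the per-packet accounting
(netdev_tx_completed_queue(), queue wake-up, skb free) is still done for the
descriptor at the reported index; the tail loop only reclaims ring space.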