From patchwork Thu May  2 09:18:41 2024
X-Patchwork-Submitter: Justin Lai
X-Patchwork-Id: 13651489
X-Patchwork-Delegate: kuba@kernel.org
From: Justin Lai
Subject: [PATCH net-next v17 07/13] rtase: Implement a function to receive packets
Date: Thu, 2 May 2024 17:18:41 +0800
Message-ID: <20240502091847.65181-8-justinlai0215@realtek.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240502091847.65181-1-justinlai0215@realtek.com>
References: <20240502091847.65181-1-justinlai0215@realtek.com>
X-Mailing-List: netdev@vger.kernel.org

Implement rx_handler to read the rx descriptor, check the received
packet against the status bits reported there, and store the packet in
a socket buffer, completing packet reception.
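For context, rx_handler is written to be driven from the driver's NAPI
poll callback, which is added elsewhere in this series. The sketch below
shows one way such a caller could look; rtase_poll(), the
ring_list/ring_entry members, and the interrupt re-enable step are
illustrative assumptions, not the actual implementation:

  /* Hypothetical NAPI poll wrapper for rx_handler(). Names other than
   * rx_handler() itself are assumptions for illustration only.
   */
  static int rtase_poll(struct napi_struct *napi, int budget)
  {
          struct rtase_int_vector *ivec =
                  container_of(napi, struct rtase_int_vector, napi);
          struct rtase_ring *ring;
          int workdone = 0;

          /* One vector may service several rx rings; ring_list and
           * ring_entry are assumed member names here.
           */
          list_for_each_entry(ring, &ivec->ring_list, ring_entry)
                  workdone += rx_handler(ring, budget);

          if (workdone < budget && napi_complete_done(napi, workdone)) {
                  /* Re-enable this vector's interrupt here; the register
                   * write is hardware-specific and omitted.
                   */
          }

          return min(workdone, budget);
  }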
Signed-off-by: Justin Lai
---
 .../net/ethernet/realtek/rtase/rtase_main.c | 146 ++++++++++++++++++
 1 file changed, 146 insertions(+)

diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
index 497b8ba8ff64..42c2b13516e9 100644
--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
+++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
@@ -453,6 +453,152 @@ static void rtase_rx_ring_clear(struct rtase_ring *ring)
         }
 }
 
+static int rtase_fragmented_frame(u32 status)
+{
+        return (status & (RTASE_RX_FIRST_FRAG | RTASE_RX_LAST_FRAG)) !=
+               (RTASE_RX_FIRST_FRAG | RTASE_RX_LAST_FRAG);
+}
+
+static void rtase_rx_csum(const struct rtase_private *tp, struct sk_buff *skb,
+                          const union rtase_rx_desc *desc)
+{
+        u32 opts2 = le32_to_cpu(desc->desc_status.opts2);
+
+        /* rx csum offload */
+        if (((opts2 & RTASE_RX_V4F) && !(opts2 & RTASE_RX_IPF)) ||
+            (opts2 & RTASE_RX_V6F)) {
+                if (((opts2 & RTASE_RX_TCPT) && !(opts2 & RTASE_RX_TCPF)) ||
+                    ((opts2 & RTASE_RX_UDPT) && !(opts2 & RTASE_RX_UDPF))) {
+                        skb->ip_summed = CHECKSUM_UNNECESSARY;
+                } else {
+                        skb->ip_summed = CHECKSUM_NONE;
+                }
+        } else {
+                skb->ip_summed = CHECKSUM_NONE;
+        }
+}
+
+static void rtase_rx_vlan_skb(union rtase_rx_desc *desc, struct sk_buff *skb)
+{
+        u32 opts2 = le32_to_cpu(desc->desc_status.opts2);
+
+        if (!(opts2 & RTASE_RX_VLAN_TAG))
+                return;
+
+        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+                               swab16(opts2 & RTASE_VLAN_TAG_MASK));
+}
+
+static void rtase_rx_skb(const struct rtase_ring *ring, struct sk_buff *skb)
+{
+        struct rtase_int_vector *ivec = ring->ivec;
+
+        napi_gro_receive(&ivec->napi, skb);
+}
+
+static int rx_handler(struct rtase_ring *ring, int budget)
+{
+        const struct rtase_private *tp = ring->ivec->tp;
+        u32 pkt_size, cur_rx, delta, entry, status;
+        union rtase_rx_desc *desc_base = ring->desc;
+        struct net_device *dev = tp->dev;
+        struct sk_buff *skb;
+        union rtase_rx_desc *desc;
+        int workdone = 0;
+
+        cur_rx = ring->cur_idx;
+        entry = cur_rx % RTASE_NUM_DESC;
+        desc = &desc_base[entry];
+
+        do {
+                /* make sure descriptor has been updated */
+                rmb();
+                status = le32_to_cpu(desc->desc_status.opts1);
+
+                if (status & RTASE_DESC_OWN)
+                        break;
+
+                if (unlikely(status & RTASE_RX_RES)) {
+                        if (net_ratelimit())
+                                netdev_warn(dev, "Rx ERROR. status = %08x\n",
+                                            status);
+
+                        dev->stats.rx_errors++;
+
+                        if (status & (RTASE_RX_RWT | RTASE_RX_RUNT))
+                                dev->stats.rx_length_errors++;
+
+                        if (status & RTASE_RX_CRC)
+                                dev->stats.rx_crc_errors++;
+
+                        if (dev->features & NETIF_F_RXALL)
+                                goto process_pkt;
+
+                        rtase_mark_to_asic(desc, tp->rx_buf_sz);
+                        goto skip_process_pkt;
+                }
+
+process_pkt:
+                pkt_size = status & RTASE_RX_PKT_SIZE_MASK;
+                if (likely(!(dev->features & NETIF_F_RXFCS)))
+                        pkt_size -= ETH_FCS_LEN;
+
+                /* The driver does not support incoming fragmented frames.
+                 * They are seen as a symptom of over-MTU-sized frames.
+                 */
+                if (unlikely(rtase_fragmented_frame(status))) {
+                        dev->stats.rx_dropped++;
+                        dev->stats.rx_length_errors++;
+                        rtase_mark_to_asic(desc, tp->rx_buf_sz);
+                        continue;
+                }
+
+                skb = ring->skbuff[entry];
+                dma_sync_single_for_cpu(&tp->pdev->dev,
+                                        ring->mis.data_phy_addr[entry],
+                                        tp->rx_buf_sz, DMA_FROM_DEVICE);
+
+                ring->skbuff[entry] = NULL;
+
+                if (dev->features & NETIF_F_RXCSUM)
+                        rtase_rx_csum(tp, skb, desc);
+
+                skb->dev = dev;
+                skb_put(skb, pkt_size);
+                skb_mark_for_recycle(skb);
+                skb->protocol = eth_type_trans(skb, dev);
+
+                if (skb->pkt_type == PACKET_MULTICAST)
+                        dev->stats.multicast++;
+
+                rtase_rx_vlan_skb(desc, skb);
+                rtase_rx_skb(ring, skb);
+
+                dev_sw_netstats_rx_add(dev, pkt_size);
+
+skip_process_pkt:
+                workdone++;
+                cur_rx++;
+                entry = cur_rx % RTASE_NUM_DESC;
+                desc = ring->desc + sizeof(union rtase_rx_desc) * entry;
+                prefetch(desc);
+        } while (workdone != budget);
+
+        ring->cur_idx = cur_rx;
+        delta = rtase_rx_ring_fill(ring, ring->dirty_idx, ring->cur_idx, 1);
+
+        if (!delta && workdone)
+                netdev_info(dev, "no Rx buffer allocated\n");
+
+        ring->dirty_idx += delta;
+
+        if ((ring->dirty_idx + RTASE_NUM_DESC) == ring->cur_idx)
+                netdev_emerg(dev, "Rx buffers exhausted\n");
+
+        return workdone;
+}
+
 static void rtase_rx_desc_init(struct rtase_private *tp, u16 idx)
 {
         struct rtase_ring *ring = &tp->rx_ring[idx];
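
A note for reviewers without the rest of the series at hand:
rtase_mark_to_asic(), called above to hand error and fragmented
descriptors back to the NIC, is introduced by an earlier patch in this
series. The sketch below shows its presumed shape, inferred from the
opts1/OWN-bit usage visible in this patch; the desc_cmd layout and the
RTASE_RING_END flag are assumptions:

  /* Presumed shape of rtase_mark_to_asic(): return a descriptor to the
   * hardware by restoring the buffer size and setting the OWN bit. Only
   * the opts1/opts2 usage is confirmed by this patch; the rest is assumed.
   */
  static void rtase_mark_to_asic(union rtase_rx_desc *desc, u32 rx_buf_sz)
  {
          u32 eor = le32_to_cpu(desc->desc_cmd.opts1) & RTASE_RING_END;

          desc->desc_status.opts2 = 0;

          /* Make all descriptor updates visible to the device before the
           * OWN bit transfers the descriptor back to hardware.
           */
          dma_wmb();
          WRITE_ONCE(desc->desc_cmd.opts1,
                     cpu_to_le32(RTASE_DESC_OWN | eor | rx_buf_sz));
  }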