From patchwork Thu Feb 4 08:40:00 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 12066559
X-Patchwork-Delegate: kuba@kernel.org
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: willemdebruijn.kernel@gmail.com, netdev@vger.kernel.org,
    stranche@codeaurora.org, subashab@codeaurora.org, Loic Poulain
Subject: [PATCH net-next v5 1/2] net: mhi-net: Add re-aggregation of fragmented packets
Date: Thu, 4 Feb 2021 09:40:00 +0100
Message-Id: <1612428002-12333-1-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

When the device side MTU is larger than the host side MTU, packets
(typically rmnet packets) are split over multiple MHI transfers. In that
case, fragments must be re-aggregated to recover the packet before it is
forwarded to the upper layer.

A fragmented packet results in an -EOVERFLOW MHI transaction status for
each of its fragments, except the final one. Such transfers were
previously treated as errors and the fragments were simply dropped.

This change adds a re-aggregation mechanism using skb chaining, via the
skb frag_list. A warning is printed (once), since this behavior usually
comes from a misconfiguration of the device (e.g. modem MTU).

Signed-off-by: Loic Poulain
Acked-by: Jesse Brandeburg
---
v2: use zero-copy skb chaining instead of skb_copy_expand.
v3: Fix nit in commit msg + remove misleading inline comment for frag_list
v4: no change
v5: reword/fix commit subject

 drivers/net/mhi_net.c | 74 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 64 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mhi_net.c b/drivers/net/mhi_net.c
index 4f512531..8800991 100644
--- a/drivers/net/mhi_net.c
+++ b/drivers/net/mhi_net.c
@@ -32,6 +32,8 @@ struct mhi_net_stats {
 struct mhi_net_dev {
 	struct mhi_device *mdev;
 	struct net_device *ndev;
+	struct sk_buff *skbagg_head;
+	struct sk_buff *skbagg_tail;
 	struct delayed_work rx_refill;
 	struct mhi_net_stats stats;
 	u32 rx_queue_sz;
@@ -132,6 +134,32 @@ static void mhi_net_setup(struct net_device *ndev)
 	ndev->tx_queue_len = 1000;
 }
 
+static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev,
+				       struct sk_buff *skb)
+{
+	struct sk_buff *head = mhi_netdev->skbagg_head;
+	struct sk_buff *tail = mhi_netdev->skbagg_tail;
+
+	/* This is non-paged skb chaining using frag_list */
+	if (!head) {
+		mhi_netdev->skbagg_head = skb;
+		return skb;
+	}
+
+	if (!skb_shinfo(head)->frag_list)
+		skb_shinfo(head)->frag_list = skb;
+	else
+		tail->next = skb;
+
+	head->len += skb->len;
+	head->data_len += skb->len;
+	head->truesize += skb->truesize;
+
+	mhi_netdev->skbagg_tail = skb;
+
+	return mhi_netdev->skbagg_head;
+}
+
 static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
 				struct mhi_result *mhi_res)
 {
@@ -142,19 +170,42 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
 	free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
 
 	if (unlikely(mhi_res->transaction_status)) {
-		dev_kfree_skb_any(skb);
-
-		/* MHI layer stopping/resetting the DL channel */
-		if (mhi_res->transaction_status == -ENOTCONN)
+		switch (mhi_res->transaction_status) {
+		case -EOVERFLOW:
+			/* Packet can not fit in one MHI buffer and has been
+			 * split over multiple MHI transfers, do re-aggregation.
+			 * That usually means the device side MTU is larger than
+			 * the host side MTU/MRU. Since this is not optimal,
+			 * print a warning (once).
+			 */
+			netdev_warn_once(mhi_netdev->ndev,
+					 "Fragmented packets received, fix MTU?\n");
+			skb_put(skb, mhi_res->bytes_xferd);
+			mhi_net_skb_agg(mhi_netdev, skb);
+			break;
+		case -ENOTCONN:
+			/* MHI layer stopping/resetting the DL channel */
+			dev_kfree_skb_any(skb);
 			return;
-
-		u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
-		u64_stats_inc(&mhi_netdev->stats.rx_errors);
-		u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
+		default:
+			/* Unknown error, simply drop */
+			dev_kfree_skb_any(skb);
+			u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
+			u64_stats_inc(&mhi_netdev->stats.rx_errors);
+			u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
+		}
 	} else {
+		skb_put(skb, mhi_res->bytes_xferd);
+
+		if (mhi_netdev->skbagg_head) {
+			/* Aggregate the final fragment */
+			skb = mhi_net_skb_agg(mhi_netdev, skb);
+			mhi_netdev->skbagg_head = NULL;
+		}
+
 		u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
 		u64_stats_inc(&mhi_netdev->stats.rx_packets);
-		u64_stats_add(&mhi_netdev->stats.rx_bytes, mhi_res->bytes_xferd);
+		u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len);
 		u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
 
 		switch (skb->data[0] & 0xf0) {
@@ -169,7 +220,6 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
 			break;
 		}
 
-		skb_put(skb, mhi_res->bytes_xferd);
 		netif_rx(skb);
 	}
 
@@ -267,6 +317,7 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
 	dev_set_drvdata(dev, mhi_netdev);
 	mhi_netdev->ndev = ndev;
 	mhi_netdev->mdev = mhi_dev;
+	mhi_netdev->skbagg_head = NULL;
 	SET_NETDEV_DEV(ndev, &mhi_dev->dev);
 	SET_NETDEV_DEVTYPE(ndev, &wwan_type);
 
@@ -301,6 +352,9 @@ static void mhi_net_remove(struct mhi_device *mhi_dev)
 
 	mhi_unprepare_from_transfer(mhi_netdev->mdev);
 
+	if (mhi_netdev->skbagg_head)
+		kfree_skb(mhi_netdev->skbagg_head);
+
 	free_netdev(mhi_netdev->ndev);
 }
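The frag_list chaining above never copies fragment payloads: the first
fragment becomes the aggregation head, later fragments are linked under
it, and only the head's length accounting grows. A rough userspace
sketch of that bookkeeping follows; the buf/agg structures and agg_add()
are illustrative stand-ins for sk_buff and mhi_net_skb_agg(), not kernel
code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for sk_buff: only the fields the aggregation touches. */
struct buf {
	struct buf *frag_list;	/* first fragment chained under the head */
	struct buf *next;	/* next fragment in the chain */
	size_t len;		/* head: total length; fragment: own length */
	char data[64];
};

struct agg {
	struct buf *head;
	struct buf *tail;
};

/* Mirror of the driver's chaining logic: the first fragment becomes the
 * head, later fragments hang off head->frag_list via the tail pointer,
 * and the head keeps the aggregated length. */
static struct buf *agg_add(struct agg *a, struct buf *b)
{
	if (!a->head) {
		a->head = a->tail = b;
		return b;
	}

	if (!a->head->frag_list)
		a->head->frag_list = b;
	else
		a->tail->next = b;

	a->head->len += b->len;
	a->tail = b;

	return a->head;
}

int main(void)
{
	struct agg a = { 0 };
	const char *parts[] = { "frag0-", "frag1-", "frag2" };

	for (int i = 0; i < 3; i++) {
		struct buf *b = calloc(1, sizeof(*b));
		b->len = strlen(parts[i]);
		memcpy(b->data, parts[i], b->len);
		agg_add(&a, b);
	}

	printf("aggregated length: %zu\n", a.head->len);	/* 17 */

	/* Walk head -> frag_list -> next and free everything. */
	struct buf *p = a.head->frag_list;
	while (p) {
		struct buf *n = p->next;
		free(p);
		p = n;
	}
	free(a.head);
	return 0;
}

When the final fragment arrives without -EOVERFLOW, the driver hands the
aggregated head to netif_rx() and clears skbagg_head, as the hunk above
shows.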
From patchwork Thu Feb 4 08:40:01 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Loic Poulain
X-Patchwork-Id: 12066561
X-Patchwork-Delegate: kuba@kernel.org
From: Loic Poulain
To: kuba@kernel.org, davem@davemloft.net
Cc: willemdebruijn.kernel@gmail.com, netdev@vger.kernel.org,
    stranche@codeaurora.org, subashab@codeaurora.org, Loic Poulain
Subject: [PATCH net-next v5 2/2] net: qualcomm: rmnet: Fix rx_handler for non-linear skbs
Date: Thu, 4 Feb 2021 09:40:01 +0100
Message-Id: <1612428002-12333-2-git-send-email-loic.poulain@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1612428002-12333-1-git-send-email-loic.poulain@linaro.org>
References: <1612428002-12333-1-git-send-email-loic.poulain@linaro.org>
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

There is no guarantee that the rmnet rx_handler is only fed linear skbs,
but the current rmnet implementation does not check for that, leading to
a crash when non-linear skbs are processed as linear ones. Fix that by
linearizing the skb before processing it.

Signed-off-by: Loic Poulain
Acked-by: Willem de Bruijn
Reviewed-by: Subash Abhinov Kasiviswanathan
---
v2: Add this patch to the series to prevent crash
v3: no change
v4: Fix skb leak in case of skb_linearize failure
v5: no change

 drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
index 3d7d3ab..3d00b32 100644
--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
+++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_handlers.c
@@ -183,6 +183,11 @@ rx_handler_result_t rmnet_rx_handler(struct sk_buff **pskb)
 	if (!skb)
 		goto done;
 
+	if (skb_linearize(skb)) {
+		kfree_skb(skb);
+		goto done;
+	}
+
 	if (skb->pkt_type == PACKET_LOOPBACK)
 		return RX_HANDLER_PASS;
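The crash comes from reading packet headers out of the linear data area
of an skb whose payload may actually sit in chained fragments (for
example the frag_list aggregation introduced by patch 1/2);
skb_linearize() pulls everything into one contiguous buffer first, and
on failure the packet is dropped via kfree_skb(). A small userspace
sketch of the same "coalesce before parse" idea follows; struct frag and
linearize() are illustrative stand-ins, not the kernel API.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fragment chain, standing in for a non-linear skb. */
struct frag {
	struct frag *next;
	size_t len;
	const uint8_t *data;
};

/* Coalesce a fragment chain into one contiguous buffer, the moral
 * equivalent of skb_linearize(): header parsers can then read fields
 * at fixed offsets without worrying about fragment boundaries. */
static uint8_t *linearize(const struct frag *f, size_t *out_len)
{
	size_t total = 0, off = 0;
	const struct frag *p;
	uint8_t *buf;

	for (p = f; p; p = p->next)
		total += p->len;

	buf = malloc(total);
	if (!buf)
		return NULL;	/* caller must drop the packet */

	for (p = f; p; p = p->next) {
		memcpy(buf + off, p->data, p->len);
		off += p->len;
	}

	*out_len = total;
	return buf;
}

int main(void)
{
	/* A 4-byte "header" split across two fragments. */
	static const uint8_t a[] = { 0x40, 0x00 }, b[] = { 0x00, 0x2c };
	struct frag f1 = { NULL, sizeof(b), b };
	struct frag f0 = { &f1, sizeof(a), a };
	size_t len;

	uint8_t *pkt = linearize(&f0, &len);
	if (!pkt)
		return 1;

	printf("len=%zu first byte=0x%02x\n", len, (unsigned)pkt[0]);
	free(pkt);
	return 0;
}

Dropping the packet when coalescing fails mirrors the kfree_skb()/goto
done path added in the hunk above.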