From patchwork Wed Sep 6 10:35:08 2023
X-Patchwork-Submitter: Liang Chen
X-Patchwork-Id: 13375507
X-Patchwork-Delegate: kuba@kernel.org
From: Liang Chen <liangchen.linux@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com
Cc: netdev@vger.kernel.org, liangchen.linux@gmail.com
Subject: [RFC PATCH net-next] pktgen: Introducing a parameter for non-shared skb testing
Date: Wed, 6 Sep 2023 18:35:08 +0800
Message-Id: <20230906103508.6789-1-liangchen.linux@gmail.com>
X-Mailer: git-send-email 2.31.1
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

Currently, skbs generated by pktgen always have their reference count
incremented before transmission, which leads to two issues:

1. Only the code paths for shared skbs can be tested.
2. Skbs can only be released by pktgen itself.

To make testing more comprehensive, introduce a "skb_single_user"
parameter that allows skbs to be transmitted with a reference count of 1,
so that non-shared skbs, and the code paths where skbs are released
inside the network stack, can be exercised as well.

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
---
 net/core/pktgen.c | 39 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 36 insertions(+), 3 deletions(-)

diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index f56b8d697014..8f48272b9d4b 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -423,6 +423,7 @@ struct pktgen_dev {
 	__u32 skb_priority;	/* skb priority field */
 	unsigned int burst;	/* number of duplicated packets to burst */
 	int node;		/* Memory node */
+	int skb_single_user;	/* allow single user skb for transmission */
 
 #ifdef CONFIG_XFRM
 	__u8	ipsmode;	/* IPSEC mode (config) */
@@ -1805,6 +1806,17 @@ static ssize_t pktgen_if_write(struct file *file,
 		return count;
 	}
 
+	if (!strcmp(name, "skb_single_user")) {
+		len = num_arg(&user_buffer[i], 1, &value);
+		if (len < 0)
+			return len;
+
+		i += len;
+		pkt_dev->skb_single_user = value;
+		sprintf(pg_result, "OK: skb_single_user=%u", pkt_dev->skb_single_user);
+		return count;
+	}
+
 	sprintf(pkt_dev->result, "No such parameter \"%s\"", name);
 	return -EINVAL;
 }
@@ -3460,6 +3472,14 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 		return;
 	}
 
+	/* If clone_skb, burst, or count parameters are configured,
+	 * it implies the need for skb reuse, hence single user skb
+	 * transmission is not allowed.
+	 */
+	if (pkt_dev->skb_single_user && (pkt_dev->clone_skb ||
+	    burst > 1 || pkt_dev->count))
+		pkt_dev->skb_single_user = 0;
+
 	/* If no skb or clone count exhausted then get new one */
 	if (!pkt_dev->skb || (pkt_dev->last_ok &&
 			      ++pkt_dev->clone_count >= pkt_dev->clone_skb)) {
@@ -3483,7 +3503,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 	if (pkt_dev->xmit_mode == M_NETIF_RECEIVE) {
 		skb = pkt_dev->skb;
 		skb->protocol = eth_type_trans(skb, skb->dev);
-		refcount_add(burst, &skb->users);
+		if (!pkt_dev->skb_single_user)
+			refcount_add(burst, &skb->users);
 		local_bh_disable();
 		do {
 			ret = netif_receive_skb(skb);
@@ -3491,6 +3512,12 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 				pkt_dev->errors++;
 			pkt_dev->sofar++;
 			pkt_dev->seq_num++;
+
+			if (pkt_dev->skb_single_user) {
+				pkt_dev->skb = NULL;
+				break;
+			}
+
 			if (refcount_read(&skb->users) != burst) {
 				/* skb was queued by rps/rfs or taps,
 				 * so cannot reuse this skb
@@ -3509,7 +3536,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 		goto out; /* Skips xmit_mode M_START_XMIT */
 	} else if (pkt_dev->xmit_mode == M_QUEUE_XMIT) {
 		local_bh_disable();
-		refcount_inc(&pkt_dev->skb->users);
+		if (!pkt_dev->skb_single_user)
+			refcount_inc(&pkt_dev->skb->users);
 
 		ret = dev_queue_xmit(pkt_dev->skb);
 		switch (ret) {
@@ -3517,6 +3545,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 			pkt_dev->sofar++;
 			pkt_dev->seq_num++;
 			pkt_dev->tx_bytes += pkt_dev->last_pkt_size;
+			if (pkt_dev->skb_single_user)
+				pkt_dev->skb = NULL;
 			break;
 		case NET_XMIT_DROP:
 		case NET_XMIT_CN:
@@ -3549,7 +3579,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 		pkt_dev->last_ok = 0;
 		goto unlock;
 	}
-	refcount_add(burst, &pkt_dev->skb->users);
+	if (!pkt_dev->skb_single_user)
+		refcount_add(burst, &pkt_dev->skb->users);
 
 xmit_more:
 	ret = netdev_start_xmit(pkt_dev->skb, odev, txq, --burst > 0);
@@ -3560,6 +3591,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 		pkt_dev->sofar++;
 		pkt_dev->seq_num++;
 		pkt_dev->tx_bytes += pkt_dev->last_pkt_size;
+		if (pkt_dev->skb_single_user)
+			pkt_dev->skb = NULL;
 		if (burst > 0 && !netif_xmit_frozen_or_drv_stopped(txq))
 			goto xmit_more;
 		break;
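
For reference, below is a minimal user-space sketch of how the new parameter
could be exercised once this patch is applied. It is not part of the patch:
the device name (eth0), the pgset() helper, and the assumption that the
interface has already been added to a pktgen kernel thread are purely for
illustration. Note that, per the check added in pktgen_xmit(), count,
clone_skb and burst have to be left at 0/0/1, otherwise skb_single_user is
silently cleared.

/*
 * Hypothetical example, not part of the patch. Assumes the patch is
 * applied, CONFIG_NET_PKTGEN is enabled, and eth0 has already been added
 * to a pktgen thread (e.g. via /proc/net/pktgen/kpktgend_0).
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void pgset(const char *path, const char *cmd)
{
	int fd = open(path, O_WRONLY);

	/* Write one pktgen command, e.g. "skb_single_user 1". */
	if (fd < 0 || write(fd, cmd, strlen(cmd)) < 0) {
		perror(path);
		exit(1);
	}
	close(fd);
}

int main(void)
{
	const char *dev = "/proc/net/pktgen/eth0";	/* hypothetical device */

	pgset(dev, "count 0");		/* count != 0 would clear skb_single_user */
	pgset(dev, "clone_skb 0");	/* no skb reuse */
	pgset(dev, "burst 1");		/* burst > 1 would clear skb_single_user */
	pgset(dev, "skb_single_user 1");	/* new parameter from this patch */

	/* Start all pktgen threads; stop later by writing "stop" here. */
	pgset("/proc/net/pktgen/pgctrl", "start");
	return 0;
}

The same sequence can be driven from a shell with echo redirections into the
same proc files, which is what the existing samples/pktgen scripts do.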