From patchwork Wed Mar 6 23:59:19 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13584917
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 6 Mar 2024 15:59:19 -0800
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Mirko Lindner, Stephen Hemminger, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Tariq Toukan, Boris Pismenny, John Fastabend, Dragos Tatulea
Message-ID: <20240306235922.282781-2-almasrymina@google.com>
In-Reply-To: <20240306235922.282781-1-almasrymina@google.com>
References: <20240306235922.282781-1-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 1/2] net: mirror skb frag ref/unref helpers
X-Patchwork-State: RFC

Refactor some of the skb frag ref/unref helpers for improved clarity.

Implement napi_pp_get_page() as the mirror counterpart of
napi_pp_put_page().

Implement napi_frag_ref() as the mirror counterpart of
napi_frag_unref().

Improve __skb_frag_ref() to become a mirror counterpart of
__skb_frag_unref(). Previously, unref could handle both pp & non-pp
pages, while ref could handle only non-pp pages; now both helpers
correctly handle both kinds of pages.

Now that __skb_frag_ref() can handle both pp & non-pp pages, remove
skb_pp_frag_ref() and use __skb_frag_ref() instead. This lets us
remove the pp-specific handling from skb_try_coalesce().

Signed-off-by: Mina Almasry

---
 include/linux/skbuff.h | 24 +++++++++++++++---
 net/core/skbuff.c      | 56 ++++++++++++++----------------------
 2 files changed, 39 insertions(+), 41 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d577e0bee18d..51316b0e20bc 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3477,15 +3477,31 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
 	return netmem_to_page(frag->netmem);
 }
 
+bool napi_pp_get_page(struct page *page);
+
+static inline void napi_frag_ref(skb_frag_t *frag, bool recycle)
+{
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && napi_pp_get_page(page))
+		return;
+#endif
+	get_page(page);
+}
+
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
+ * @recycle: skb->pp_recycle param of the parent skb.
  *
- * Takes an additional reference on the paged fragment @frag.
+ * Takes an additional reference on the paged fragment @frag. Obtains the
+ * correct reference count depending on whether skb->pp_recycle is set and
+ * whether the frag is a page pool frag.
  */
-static inline void __skb_frag_ref(skb_frag_t *frag)
+static inline void __skb_frag_ref(skb_frag_t *frag, bool recycle)
 {
-	get_page(skb_frag_page(frag));
+	napi_frag_ref(frag, recycle);
 }
 
 /**
@@ -3497,7 +3513,7 @@ static inline void __skb_frag_ref(skb_frag_t *frag, bool recycle)
  */
 static inline void skb_frag_ref(struct sk_buff *skb, int f)
 {
-	__skb_frag_ref(&skb_shinfo(skb)->frags[f]);
+	__skb_frag_ref(&skb_shinfo(skb)->frags[f], skb->pp_recycle);
 }
 
 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 1f918e602bc4..6d234faa9d9e 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1006,6 +1006,21 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 EXPORT_SYMBOL(skb_cow_data_for_xdp);
 
 #if IS_ENABLED(CONFIG_PAGE_POOL)
+bool napi_pp_get_page(struct page *page)
+{
+	struct page *head_page;
+
+	head_page = compound_head(page);
+
+	if (!is_pp_page(head_page))
+		return false;
+
+	page_pool_ref_page(head_page);
+
+	return true;
+}
+EXPORT_SYMBOL(napi_pp_get_page);
+
 bool napi_pp_put_page(struct page *page, bool napi_safe)
 {
 	bool allow_direct = false;
@@ -1058,37 +1073,6 @@ static bool skb_pp_recycle(struct sk_buff *skb, void *data, bool napi_safe)
 	return napi_pp_put_page(virt_to_page(data), napi_safe);
 }
 
-/**
- * skb_pp_frag_ref() - Increase fragment references of a page pool aware skb
- * @skb: page pool aware skb
- *
- * Increase the fragment reference count (pp_ref_count) of a skb. This is
- * intended to gain fragment references only for page pool aware skbs,
- * i.e. when skb->pp_recycle is true, and not for fragments in a
- * non-pp-recycling skb. It has a fallback to increase references on normal
- * pages, as page pool aware skbs may also have normal page fragments.
- */
-static int skb_pp_frag_ref(struct sk_buff *skb)
-{
-	struct skb_shared_info *shinfo;
-	struct page *head_page;
-	int i;
-
-	if (!skb->pp_recycle)
-		return -EINVAL;
-
-	shinfo = skb_shinfo(skb);
-
-	for (i = 0; i < shinfo->nr_frags; i++) {
-		head_page = compound_head(skb_frag_page(&shinfo->frags[i]));
-		if (likely(is_pp_page(head_page)))
-			page_pool_ref_page(head_page);
-		else
-			page_ref_inc(head_page);
-	}
-	return 0;
-}
-
 static void skb_kfree_head(void *head, unsigned int end_offset)
 {
 	if (end_offset == SKB_SMALL_HEAD_HEADROOM)
@@ -4199,7 +4183,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 			to++;
 
 		} else {
-			__skb_frag_ref(fragfrom);
+			__skb_frag_ref(fragfrom, skb->pp_recycle);
 			skb_frag_page_copy(fragto, fragfrom);
 			skb_frag_off_copy(fragto, fragfrom);
 			skb_frag_size_set(fragto, todo);
@@ -4849,7 +4833,7 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
 		}
 
 		*nskb_frag = (i < 0) ? skb_head_frag_to_page_desc(frag_skb) : *frag;
-		__skb_frag_ref(nskb_frag);
+		__skb_frag_ref(nskb_frag, nskb->pp_recycle);
 		size = skb_frag_size(nskb_frag);
 
 		if (pos < offset) {
@@ -5980,10 +5964,8 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
 	/* if the skb is not cloned this does nothing
 	 * since we set nr_frags to 0.
 	 */
-	if (skb_pp_frag_ref(from)) {
-		for (i = 0; i < from_shinfo->nr_frags; i++)
-			__skb_frag_ref(&from_shinfo->frags[i]);
-	}
+	for (i = 0; i < from_shinfo->nr_frags; i++)
+		__skb_frag_ref(&from_shinfo->frags[i], from->pp_recycle);
 
 	to->truesize += delta;
 	to->len += len;
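
To make the new symmetry concrete, here is a minimal usage sketch. It is
an illustration only, not part of the patch: the function name and calling
context are hypothetical, and at this point in the series
__skb_frag_unref() still takes two arguments (napi_safe is only added in
patch 2/2).

/*
 * Hypothetical caller (illustration only, assumes <linux/skbuff.h>).
 * With this patch, taking and releasing fragment references is
 * symmetric: the pp_recycle bit of the owning skb selects the
 * refcounting scheme on both sides (page_pool pp_ref_count for page
 * pool frags, the plain page refcount otherwise).
 */
static void hold_then_release_frags(struct sk_buff *skb)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	/* Take: napi_pp_get_page() is tried first for pp_recycle skbs. */
	for (i = 0; i < shinfo->nr_frags; i++)
		__skb_frag_ref(&shinfo->frags[i], skb->pp_recycle);

	/* ... the frags may now be shared with another skb ... */

	/* Release: the exact mirror, via napi_pp_put_page(). */
	for (i = 0; i < shinfo->nr_frags; i++)
		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle);
}

This is the same pairing that skb_try_coalesce() now relies on, where the
removed skb_pp_frag_ref() loop used to special-case pp pages by hand.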
From patchwork Wed Mar 6 23:59:20 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13584918
X-Patchwork-Delegate: kuba@kernel.org
Date: Wed, 6 Mar 2024 15:59:20 -0800
From: Mina Almasry
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Mirko Lindner, Stephen Hemminger, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Tariq Toukan, Boris Pismenny, John Fastabend, Dragos Tatulea
Message-ID: <20240306235922.282781-3-almasrymina@google.com>
In-Reply-To: <20240306235922.282781-1-almasrymina@google.com>
References: <20240306235922.282781-1-almasrymina@google.com>
Subject: [RFC PATCH net-next v1 2/2] net: remove napi_frag_[un]ref
X-Patchwork-State: RFC

With the changes in the previous patch, the napi_frag_[un]ref()
helpers have become redundant. Remove them, and use
__skb_frag_[un]ref() directly.

Signed-off-by: Mina Almasry
Reviewed-by: Dragos Tatulea

---
 drivers/net/ethernet/marvell/sky2.c        |  2 +-
 drivers/net/ethernet/mellanox/mlx4/en_rx.c |  2 +-
 include/linux/skbuff.h                     | 45 +++++++++-------------
 net/core/skbuff.c                          |  4 +-
 net/tls/tls_device.c                       |  2 +-
 net/tls/tls_strp.c                         |  2 +-
 6 files changed, 24 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index 07720841a8d7..8e00a5856856 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2501,7 +2501,7 @@ static void skb_put_frags(struct sk_buff *skb, unsigned int hdr_space,
 
 		if (length == 0) {
 			/* don't need this page */
-			__skb_frag_unref(frag, false);
+			__skb_frag_unref(frag, false, false);
 			--skb_shinfo(skb)->nr_frags;
 		} else {
 			size = min(length, (unsigned) PAGE_SIZE);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index eac49657bd07..4dbf29b46979 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -526,7 +526,7 @@ static int mlx4_en_complete_rx_desc(struct mlx4_en_priv *priv,
 fail:
 	while (nr > 0) {
 		nr--;
-		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false);
+		__skb_frag_unref(skb_shinfo(skb)->frags + nr, false, false);
 	}
 	return 0;
 }
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 51316b0e20bc..9cd04c315592 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3479,17 +3479,6 @@ static inline struct page *skb_frag_page(const skb_frag_t *frag)
 
 bool napi_pp_get_page(struct page *page);
 
-static inline void napi_frag_ref(skb_frag_t *frag, bool recycle)
-{
-	struct page *page = skb_frag_page(frag);
-
-#ifdef CONFIG_PAGE_POOL
-	if (recycle && napi_pp_get_page(page))
-		return;
-#endif
-	get_page(page);
-}
-
 /**
  * __skb_frag_ref - take an addition reference on a paged fragment.
  * @frag: the paged fragment
@@ -3501,7 +3490,13 @@ static inline void napi_frag_ref(skb_frag_t *frag, bool recycle)
  */
 static inline void __skb_frag_ref(skb_frag_t *frag, bool recycle)
 {
-	napi_frag_ref(frag, recycle);
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && napi_pp_get_page(page))
+		return;
+#endif
+	get_page(page);
 }
 
 /**
@@ -3522,29 +3517,25 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
 		      struct bpf_prog *prog);
 bool napi_pp_put_page(struct page *page, bool napi_safe);
 
-static inline void
-napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
-{
-	struct page *page = skb_frag_page(frag);
-
-#ifdef CONFIG_PAGE_POOL
-	if (recycle && napi_pp_put_page(page, napi_safe))
-		return;
-#endif
-	put_page(page);
-}
-
 /**
  * __skb_frag_unref - release a reference on a paged fragment.
  * @frag: the paged fragment
  * @recycle: recycle the page if allocated via page_pool
+ * @napi_safe: set to true if running in the same napi context as where the
+ *	       consumer would run.
  *
  * Releases a reference on the paged fragment @frag
  * or recycles the page via the page_pool API.
  */
-static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle)
+static inline void __skb_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe)
 {
-	napi_frag_unref(frag, recycle, false);
+	struct page *page = skb_frag_page(frag);
+
+#ifdef CONFIG_PAGE_POOL
+	if (recycle && napi_pp_put_page(page, napi_safe))
+		return;
+#endif
+	put_page(page);
 }
 
 /**
@@ -3559,7 +3550,7 @@ static inline void skb_frag_unref(struct sk_buff *skb, int f)
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 
 	if (!skb_zcopy_managed(skb))
-		__skb_frag_unref(&shinfo->frags[f], skb->pp_recycle);
+		__skb_frag_unref(&shinfo->frags[f], skb->pp_recycle, false);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6d234faa9d9e..ed7f7e960b78 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1114,7 +1114,7 @@ static void skb_release_data(struct sk_buff *skb, enum skb_drop_reason reason,
 	}
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		napi_frag_unref(&shinfo->frags[i], skb->pp_recycle, napi_safe);
+		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle, napi_safe);
 
 free_head:
 	if (shinfo->frag_list)
@@ -4205,7 +4205,7 @@ int skb_shift(struct sk_buff *tgt, struct sk_buff *skb, int shiftlen)
 
 		fragto = &skb_shinfo(tgt)->frags[merge];
 
 		skb_frag_size_add(fragto, skb_frag_size(fragfrom));
-		__skb_frag_unref(fragfrom, skb->pp_recycle);
+		__skb_frag_unref(fragfrom, skb->pp_recycle, false);
 	}
 
 	/* Reposition in the original skb */
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index bf8ed36b1ad6..5dc6381f34fb 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -140,7 +140,7 @@ static void destroy_record(struct tls_record_info *record)
 	int i;
 
 	for (i = 0; i < record->num_frags; i++)
-		__skb_frag_unref(&record->frags[i], false);
+		__skb_frag_unref(&record->frags[i], false, false);
 	kfree(record);
 }
diff --git a/net/tls/tls_strp.c b/net/tls/tls_strp.c
index ca1e0e198ceb..85b41f226978 100644
--- a/net/tls/tls_strp.c
+++ b/net/tls/tls_strp.c
@@ -196,7 +196,7 @@ static void tls_strp_flush_anchor_copy(struct tls_strparser *strp)
 	DEBUG_NET_WARN_ON_ONCE(atomic_read(&shinfo->dataref) != 1);
 
 	for (i = 0; i < shinfo->nr_frags; i++)
-		__skb_frag_unref(&shinfo->frags[i], false);
+		__skb_frag_unref(&shinfo->frags[i], false, false);
 	shinfo->nr_frags = 0;
 
 	if (strp->copy_mode) {
 		kfree_skb_list(shinfo->frag_list);
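
To make the resulting calling convention concrete, here is a minimal
sketch of a caller after this patch. It is an illustration only, not from
the series: the function name and the in_napi_poll flag are hypothetical.

/*
 * Hypothetical caller (illustration only, assumes <linux/skbuff.h>).
 * After this patch the napi_safe decision is passed straight to
 * __skb_frag_unref(). true is only valid when running in the NAPI
 * context that the page_pool is bound to (e.g. the driver's poll
 * handler), where napi_pp_put_page() may recycle into the pool's
 * lock-free direct cache; any other context must pass false, as the
 * tree-wide conversions above do.
 */
static void drop_all_frags(struct sk_buff *skb, bool in_napi_poll)
{
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	int i;

	for (i = 0; i < shinfo->nr_frags; i++)
		__skb_frag_unref(&shinfo->frags[i], skb->pp_recycle,
				 in_napi_poll);
	shinfo->nr_frags = 0;
}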