From patchwork Wed Apr 10 19:05:00 2024
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 13625006
Date: Wed, 10 Apr 2024 12:05:00 -0700
Message-ID: <20240410190505.1225848-1-almasrymina@google.com>
Subject: [PATCH net-next v6 0/2] Minor cleanups to skb frag ref/unref
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Mina Almasry, Ayush Sawal, "David S. Miller", Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, Mirko Lindner, Stephen Hemminger,
    Tariq Toukan, Wei Liu, Paul Durrant, Steffen Klassert, Herbert Xu,
    David Ahern, Boris Pismenny, John Fastabend, Dragos Tatulea
X-Patchwork-Delegate: kuba@kernel.org

v6:
- Rebased on top of net-next; dropped the merged patches.
- Moved the skb ref helpers to a new header file (Jakub).

v5:
- Applied feedback from Eric to inline napi_pp_get_page().
- Applied Reviewed-by's.

v4:
- Rebased to net-next.
- Clarified the skb_shift() code change in the commit message.
- Use skb->pp_recycle in a couple of places where I previously hardcoded 'false'.

v3:
- Fixed patchwork build errors/warnings from a patch-by-patch allmodconfig build.

v2:
- Removed RFC tag.
- Rebased on net-next after the merge window opened.
- Added a patch at the beginning, "net: make napi_frag_unref reuse skb_page_unref", because a recent patch introduced some code duplication that can also be improved.
- Addressed feedback from Dragos & Yunsheng.
- Added Dragos's Reviewed-by.

This series is largely motivated by a recent discussion where there was some confusion about how to properly ref/unref pp pages vs. non-pp pages:

https://lore.kernel.org/netdev/CAHS8izOoO-EovwMwAm9tLYetwikNPxC0FKyVGu1TPJWSz4bGoA@mail.gmail.com/T/#t

There is some subtlety there because pp uses page->pp_ref_count for refcounting, while non-pp pages use get_page()/put_page(). Getting the refcounting pairs wrong can lead to a kernel crash. Additionally, it may not currently be obvious to skb users unaware of page pool internals how to properly acquire a ref on a pp frag: it requires checking skb->pp_recycle & is_pp_page() to make the correct calls, and may require handling at the call site that is aware of pp internals. A rough sketch of that pattern follows the goals below.

This series is a minor refactor with a couple of goals:

1. skb users should be able to ref/unref a frag using the [__]skb_frag_[un]ref() functions without needing to understand pp concepts or the pp_ref_count vs. get/put_page() differences.

2. Reference counting functions should have a mirror opposite, i.e. there should be a foo_unref() for every foo_ref(), with a mirror-opposite implementation (as much as possible).
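To make that subtlety concrete, here is a minimal sketch of the dispatch a caller currently has to get right, written against the checks named above. skb_frag_page(), get_page() and put_page() are existing kernel APIs, and skb->pp_recycle / is_pp_page() are the checks the cover letter refers to; pp_page_take_ref() and pp_page_drop_ref() are hypothetical placeholders for whatever page_pool primitive adjusts page->pp_ref_count, not functions from this series.

#include <linux/skbuff.h>

/*
 * Illustration only, not code from these patches.  pp_page_take_ref()
 * and pp_page_drop_ref() stand in for the page_pool primitives that
 * manipulate page->pp_ref_count.
 */
static inline void frag_take_ref(struct sk_buff *skb, const skb_frag_t *frag)
{
        struct page *page = skb_frag_page(frag);

        if (skb->pp_recycle && is_pp_page(page))
                pp_page_take_ref(page);         /* pp refcounting path */
        else
                get_page(page);                 /* ordinary page refcount */
}

/* The "mirror opposite" the series argues for: same shape, reversed. */
static inline void frag_drop_ref(struct sk_buff *skb, const skb_frag_t *frag)
{
        struct page *page = skb_frag_page(frag);

        if (skb->pp_recycle && is_pp_page(page))
                pp_page_drop_ref(page);         /* must pair with the pp ref */
        else
                put_page(page);                 /* must pair with get_page() */
}

Mixing the two branches (e.g. get_page() on a pp frag, or put_page() paired with a pp ref) is exactly the mispairing that can crash, which is why hiding this behind [__]skb_frag_[un]ref() is attractive.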
This is an RFC to collect feedback on whether this change is desirable, but also so that I don't race with the fix for the crash Dragos is seeing:

https://lore.kernel.org/lkml/CAHS8izN436pn3SndrzsCyhmqvJHLyxgCeDpWXA4r1ANt3RCDLQ@mail.gmail.com/T/

Cc: Dragos Tatulea

Mina Almasry (2):
  net: move skb ref helpers to new header
  net: mirror skb frag ref/unref helpers

 .../chelsio/inline_crypto/ch_ktls/chcr_ktls.c |   3 +-
 drivers/net/ethernet/marvell/sky2.c           |   1 +
 drivers/net/ethernet/mellanox/mlx4/en_rx.c    |   1 +
 drivers/net/ethernet/sun/cassini.c            |   5 +-
 drivers/net/veth.c                            |   3 +-
 drivers/net/xen-netback/netback.c             |   1 +
 include/linux/skbuff.h                        |  63 -----------
 include/linux/skbuff_ref.h                    | 106 ++++++++++++++++++
 net/core/gro.c                                |   1 +
 net/core/skbuff.c                             |  47 +-------
 net/ipv4/esp4.c                               |   1 +
 net/ipv4/tcp_output.c                         |   1 +
 net/ipv6/esp6.c                               |   1 +
 net/tls/tls_device.c                          |   1 +
 net/tls/tls_device_fallback.c                 |   3 +-
 net/tls/tls_strp.c                            |   1 +
 16 files changed, 129 insertions(+), 110 deletions(-)
 create mode 100644 include/linux/skbuff_ref.h
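Reading the diffstat, the many single-line additions across the drivers, net/core, net/ipv4, net/ipv6 and net/tls files are consistent with each of those files simply gaining an include of the new header once the helpers move out of skbuff.h. This is an assumption based on the diffstat, not text copied from the patches:

/* Assumed shape of the one-line additions above. */
#include <linux/skbuff_ref.h>   /* new home of the skb frag ref/unref helpers */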