From patchwork Fri Dec 16 12:45:26 2022
From: Benjamin Coddington
To: Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet
Cc: Guillaume Nault, Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder,
    Jens Axboe, Josef Bacik, Keith Busch, Christoph Hellwig, Sagi Grimberg,
    Lee Duncan, Chris Leech, Mike Christie, "James E.J. Bottomley",
    "Martin K. Petersen", Valentina Manea, Shuah Khan, Greg Kroah-Hartman,
    David Howells, Marc Dionne, Steve French, Christine Caulfield,
    David Teigland, Mark Fasheh, Joel Becker, Joseph Qi,
    Eric Van Hensbergen, Latchesar Ionkov, Dominique Martinet,
    Ilya Dryomov, Xiubo Li, Chuck Lever, Jeff Layton, Trond Myklebust,
    Anna Schumaker, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org
Subject: [PATCH net v4 1/3] net: Introduce sk_use_task_frag in struct sock.
Date: Fri, 16 Dec 2022 07:45:26 -0500
Message-Id: <6a2ed9386c45c6b9dd046ab4ffa078b27b5d8800.1671194454.git.bcodding@redhat.com>

From: Guillaume Nault

Sockets that can be used while recursing into memory reclaim, like those
used by network block devices and file systems, mustn't use
current->task_frag: if the current process is already using it, then the
inner memory reclaim call would corrupt the task_frag structure.

To avoid this, sk_page_frag() uses ->sk_allocation to detect sockets that
mustn't use current->task_frag, assuming that those used during memory
reclaim had their allocation constraints reflected in ->sk_allocation.

This unfortunately doesn't cover all cases: in an attempt to remove all
usage of GFP_NOFS and GFP_NOIO, sunrpc stopped setting these flags in
->sk_allocation, and used memalloc_nofs critical sections instead. This
breaks the sk_page_frag() heuristic since the allocation constraints are
now stored in current->flags, which sk_page_frag() can't read without
risking triggering a cache miss and slowing down TCP's fast path.

This patch creates a new field in struct sock, named sk_use_task_frag,
which sockets with memory reclaim constraints can set to false if they
can't safely use current->task_frag. In such cases, sk_page_frag() now
always returns the socket's page_frag (->sk_frag). The first user is
sunrpc, which needs to avoid using current->task_frag but can keep
->sk_allocation set to GFP_KERNEL otherwise.

Eventually, it might be possible to simplify sk_page_frag() by only
testing ->sk_use_task_frag and avoid relying on the ->sk_allocation
heuristic entirely (assuming other sockets will set ->sk_use_task_frag
according to their constraints in the future).

The new ->sk_use_task_frag field is placed in a hole in struct sock and
belongs to a cache line shared with ->sk_shutdown. Therefore it should
be hot and shouldn't have negative performance impacts on TCP's fast
path (sk_shutdown is tested just before the while() loop in
tcp_sendmsg_locked()).

Link: https://lore.kernel.org/netdev/b4d8cb09c913d3e34f853736f3f5628abfd7f4b6.1656699567.git.gnault@redhat.com/
Signed-off-by: Guillaume Nault
Reviewed-by: Benjamin Coddington
---
 include/net/sock.h | 11 +++++++++--
 net/core/sock.c    |  1 +
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index ecea3dcc2217..fefe1f4abf19 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -318,6 +318,9 @@ struct sk_filter;
  *	@sk_stamp: time stamp of last packet received
  *	@sk_stamp_seq: lock for accessing sk_stamp on 32 bit architectures only
  *	@sk_tsflags: SO_TIMESTAMPING flags
+ *	@sk_use_task_frag: allow sk_page_frag() to use current->task_frag.
+ *			   Sockets that can be used under memory reclaim should
+ *			   set this to false.
  *	@sk_bind_phc: SO_TIMESTAMPING bind PHC index of PTP virtual clock
  *		      for timestamping
  *	@sk_tskey: counter to disambiguate concurrent tstamp requests
@@ -512,6 +515,7 @@ struct sock {
 	u8			sk_txtime_deadline_mode : 1,
 				sk_txtime_report_errors : 1,
 				sk_txtime_unused : 6;
+	bool			sk_use_task_frag;
 
 	struct socket		*sk_socket;
 	void			*sk_user_data;
@@ -2561,14 +2565,17 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
  * socket operations and end up recursing into sk_page_frag()
  * while it's already in use: explicitly avoid task page_frag
  * usage if the caller is potentially doing any of them.
- * This assumes that page fault handlers use the GFP_NOFS flags.
+ * This assumes that page fault handlers use the GFP_NOFS flags or
+ * explicitly disable sk_use_task_frag.
  *
  * Return: a per task page_frag if context allows that,
  * otherwise a per socket one.
  */
 static inline struct page_frag *sk_page_frag(struct sock *sk)
 {
-	if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
+	if (sk->sk_use_task_frag &&
+	    (sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC |
+				  __GFP_FS)) ==
 	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
 		return &current->task_frag;

diff --git a/net/core/sock.c b/net/core/sock.c
index d2587d8712db..f954d5893e79 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3390,6 +3390,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
 	sk->sk_rcvbuf		=	READ_ONCE(sysctl_rmem_default);
 	sk->sk_sndbuf		=	READ_ONCE(sysctl_wmem_default);
 	sk->sk_state		=	TCP_CLOSE;
+	sk->sk_use_task_frag	=	true;
 	sk_set_socket(sk, sock);
 
 	sock_set_flag(sk, SOCK_ZAPPED);
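Taken on its own, the old heuristic is easy to sketch outside the kernel.
The stand-alone program below is illustrative only: it is not part of the
patch, and the GFP bit values are placeholders chosen so the mask
relationships mirror the kernel's (GFP_KERNEL permits direct reclaim and
FS access, GFP_NOFS and GFP_NOIO drop __GFP_FS, GFP_ATOMIC cannot enter
direct reclaim). It evaluates the pre-patch test for a few common
sk_allocation values:

/* Stand-alone sketch (not kernel code): evaluate the old sk_page_frag()
 * heuristic for a few common sk_allocation masks.  Bit values are
 * illustrative placeholders; only the containment relationships between
 * the composite masks follow the kernel definitions. */
#include <stdio.h>

#define __GFP_DIRECT_RECLAIM	0x1u
#define __GFP_FS		0x2u
#define __GFP_IO		0x4u
#define __GFP_MEMALLOC		0x8u

#define GFP_KERNEL	(__GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOFS	(__GFP_DIRECT_RECLAIM | __GFP_IO)
#define GFP_NOIO	(__GFP_DIRECT_RECLAIM)
#define GFP_ATOMIC	0x0u	/* no direct reclaim from atomic context */

/* Pre-patch heuristic: only allocations that may enter the filesystem
 * during direct reclaim were allowed to use current->task_frag. */
static int old_heuristic_uses_task_frag(unsigned int sk_allocation)
{
	return (sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
	       (__GFP_DIRECT_RECLAIM | __GFP_FS);
}

int main(void)
{
	const struct { const char *name; unsigned int mask; } cases[] = {
		{ "GFP_KERNEL", GFP_KERNEL },	/* sunrpc after the memalloc_nofs conversion */
		{ "GFP_NOFS",   GFP_NOFS },
		{ "GFP_NOIO",   GFP_NOIO },
		{ "GFP_ATOMIC", GFP_ATOMIC },
	};

	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("%-10s -> %s\n", cases[i].name,
		       old_heuristic_uses_task_frag(cases[i].mask) ?
		       "current->task_frag" : "sk->sk_frag");
	return 0;
}

Only GFP_KERNEL selects current->task_frag, which is why sunrpc's move to
GFP_KERNEL plus memalloc_nofs sections silently defeated the check: the
reclaim constraint now lives in current->flags, where sk_page_frag() never
looks, and the new ->sk_use_task_frag bit restores a cheap, socket-local
way to express it.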
From patchwork Fri Dec 16 12:45:27 2022
From: Benjamin Coddington
To: Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet
Cc: Guillaume Nault, Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder,
    Jens Axboe, Josef Bacik, Keith Busch, Christoph Hellwig, Sagi Grimberg,
    Lee Duncan, Chris Leech, Mike Christie, "James E.J. Bottomley",
    "Martin K. Petersen", Valentina Manea, Shuah Khan, Greg Kroah-Hartman,
    David Howells, Marc Dionne, Steve French, Christine Caulfield,
    David Teigland, Mark Fasheh, Joel Becker, Joseph Qi,
    Eric Van Hensbergen, Latchesar Ionkov, Dominique Martinet,
    Ilya Dryomov, Xiubo Li, Chuck Lever, Jeff Layton, Trond Myklebust,
    Anna Schumaker, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org
Subject: [PATCH net v4 2/3] Treewide: Stop corrupting socket's task_frag
Date: Fri, 16 Dec 2022 07:45:27 -0500

Since moving to memalloc_nofs_save/restore, SUNRPC has stopped setting the
GFP_NOIO flag on sk_allocation, which the networking layer uses to decide
when it is safe to use current->task_frag. The result is unexpected
corruption of task_frag when SUNRPC is involved in memory reclaim.

The corruption can be seen in crashes, but the root cause is often
difficult to ascertain because a crashing machine's stack trace will have
no evidence of being near NFS or SUNRPC code. I believe this problem to be
much more pervasive than reports to the community may indicate.

Fix this by having kernel users of sockets that may corrupt task_frag due
to reclaim set sk_use_task_frag = false. Preemptively correcting this
situation for users that still set sk_allocation allows them to convert to
memalloc_nofs_save/restore without the same unexpected corruptions that
are sure to follow, unlikely to show up in testing, and difficult to
bisect.

CC: Philipp Reisner
CC: Lars Ellenberg
CC: Christoph Böhmwalder
CC: Jens Axboe
CC: Josef Bacik
CC: Keith Busch
CC: Christoph Hellwig
CC: Sagi Grimberg
CC: Lee Duncan
CC: Chris Leech
CC: Mike Christie
CC: "James E.J. Bottomley"
CC: "Martin K. Petersen"
CC: Valentina Manea
CC: Shuah Khan
CC: Greg Kroah-Hartman
CC: David Howells
CC: Marc Dionne
CC: Steve French
CC: Christine Caulfield
CC: David Teigland
CC: Mark Fasheh
CC: Joel Becker
CC: Joseph Qi
CC: Eric Van Hensbergen
CC: Latchesar Ionkov
CC: Dominique Martinet
CC: "David S. Miller"
CC: Eric Dumazet
CC: Jakub Kicinski
CC: Paolo Abeni
CC: Ilya Dryomov
CC: Xiubo Li
CC: Chuck Lever
CC: Jeff Layton
CC: Trond Myklebust
CC: Anna Schumaker
CC: Steffen Klassert
CC: Herbert Xu
CC: netdev@vger.kernel.org
Suggested-by: Guillaume Nault
Signed-off-by: Benjamin Coddington
Reviewed-by: Guillaume Nault
---
 drivers/block/drbd/drbd_receiver.c | 3 +++
 drivers/block/nbd.c                | 1 +
 drivers/nvme/host/tcp.c            | 1 +
 drivers/scsi/iscsi_tcp.c           | 1 +
 drivers/usb/usbip/usbip_common.c   | 1 +
 fs/cifs/connect.c                  | 1 +
 fs/dlm/lowcomms.c                  | 2 ++
 fs/ocfs2/cluster/tcp.c             | 1 +
 net/9p/trans_fd.c                  | 1 +
 net/ceph/messenger.c               | 1 +
 net/sunrpc/xprtsock.c              | 3 +++
 net/xfrm/espintcp.c                | 1 +
 12 files changed, 17 insertions(+)

diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 0e58a3187345..757f4692b5bd 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1030,6 +1030,9 @@ static int conn_connect(struct drbd_connection *connection)
 	sock.socket->sk->sk_allocation = GFP_NOIO;
 	msock.socket->sk->sk_allocation = GFP_NOIO;
 
+	sock.socket->sk->sk_use_task_frag = false;
+	msock.socket->sk->sk_use_task_frag = false;
+
 	sock.socket->sk->sk_priority = TC_PRIO_INTERACTIVE_BULK;
 	msock.socket->sk->sk_priority = TC_PRIO_INTERACTIVE;

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index e379ccc63c52..592cfa8b765a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -512,6 +512,7 @@ static int sock_xmit(struct nbd_device *nbd, int index, int send,
 	noreclaim_flag = memalloc_noreclaim_save();
 	do {
 		sock->sk->sk_allocation = GFP_NOIO | __GFP_MEMALLOC;
+		sock->sk->sk_use_task_frag = false;
 		msg.msg_name = NULL;
 		msg.msg_namelen = 0;
 		msg.msg_control = NULL;

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index b69b89166b6b..8cedc1ef496c 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1537,6 +1537,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid)
 	queue->sock->sk->sk_rcvtimeo = 10 * HZ;
 
 	queue->sock->sk->sk_allocation = GFP_ATOMIC;
+	queue->sock->sk->sk_use_task_frag = false;
 	nvme_tcp_set_queue_io_cpu(queue);
 	queue->request = NULL;
 	queue->data_remaining = 0;

diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 5fb1f364e815..1d1cf641937c 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -738,6 +738,7 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
 	sk->sk_reuse = SK_CAN_REUSE;
 	sk->sk_sndtimeo = 15 * HZ; /* FIXME: make it configurable */
 	sk->sk_allocation = GFP_ATOMIC;
+	sk->sk_use_task_frag = false;
 	sk_set_memalloc(sk);
 	sock_no_linger(sk);

diff --git a/drivers/usb/usbip/usbip_common.c b/drivers/usb/usbip/usbip_common.c
index f8b326eed54d..a2b2da1255dd 100644
--- a/drivers/usb/usbip/usbip_common.c
+++ b/drivers/usb/usbip/usbip_common.c
@@ -315,6 +315,7 @@ int usbip_recv(struct socket *sock, void *buf, int size)
 
 	do {
 		sock->sk->sk_allocation = GFP_NOIO;
+		sock->sk->sk_use_task_frag = false;
 		result = sock_recvmsg(sock, &msg, MSG_WAITALL);
 		if (result <= 0)

diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index e80252a83225..7bc7b5e03c51 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -2944,6 +2944,7 @@ generic_ip_connect(struct TCP_Server_Info *server)
 		cifs_dbg(FYI, "Socket created\n");
 		server->ssocket = socket;
 		socket->sk->sk_allocation = GFP_NOFS;
+		socket->sk->sk_use_task_frag = false;
 		if (sfamily == AF_INET6)
 			cifs_reclassify_socket6(socket);
 		else

diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index 8b80ca0cd65f..4450721ec83c 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -645,6 +645,7 @@ static void add_sock(struct socket *sock, struct connection *con)
 	if (dlm_config.ci_protocol == DLM_PROTO_SCTP)
 		sk->sk_state_change = lowcomms_state_change;
 	sk->sk_allocation = GFP_NOFS;
+	sk->sk_use_task_frag = false;
 	sk->sk_error_report = lowcomms_error_report;
 	release_sock(sk);
 }
@@ -1769,6 +1770,7 @@ static int dlm_listen_for_all(void)
 	listen_con.sock = sock;
 
 	sock->sk->sk_allocation = GFP_NOFS;
+	sock->sk->sk_use_task_frag = false;
 	sock->sk->sk_data_ready = lowcomms_listen_data_ready;
 	release_sock(sock->sk);

diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
index 37d222bdfc8c..a07b24d170f2 100644
--- a/fs/ocfs2/cluster/tcp.c
+++ b/fs/ocfs2/cluster/tcp.c
@@ -1602,6 +1602,7 @@ static void o2net_start_connect(struct work_struct *work)
 	sc->sc_sock = sock; /* freed by sc_kref_release */
 
 	sock->sk->sk_allocation = GFP_ATOMIC;
+	sock->sk->sk_use_task_frag = false;
 
 	myaddr.sin_family = AF_INET;
 	myaddr.sin_addr.s_addr = mynode->nd_ipv4_address;

diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
index 07db2f436d44..d9120f14684b 100644
--- a/net/9p/trans_fd.c
+++ b/net/9p/trans_fd.c
@@ -868,6 +868,7 @@ static int p9_socket_open(struct p9_client *client, struct socket *csocket)
 	}
 
 	csocket->sk->sk_allocation = GFP_NOIO;
+	csocket->sk->sk_use_task_frag = false;
 	file = sock_alloc_file(csocket, 0, NULL);
 	if (IS_ERR(file)) {
 		pr_err("%s (%d): failed to map fd\n",

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index dfa237fbd5a3..1d06e114ba3f 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -446,6 +446,7 @@ int ceph_tcp_connect(struct ceph_connection *con)
 	if (ret)
 		return ret;
 	sock->sk->sk_allocation = GFP_NOFS;
+	sock->sk->sk_use_task_frag = false;
 
 #ifdef CONFIG_LOCKDEP
 	lockdep_set_class(&sock->sk->sk_lock, &socket_class);

diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index c0506d0d7478..aaa5b2741b79 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -1882,6 +1882,7 @@ static int xs_local_finish_connecting(struct rpc_xprt *xprt,
 		sk->sk_write_space = xs_udp_write_space;
 		sk->sk_state_change = xs_local_state_change;
 		sk->sk_error_report = xs_error_report;
+		sk->sk_use_task_frag = false;
 
 		xprt_clear_connected(xprt);
@@ -2082,6 +2083,7 @@ static void xs_udp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 		sk->sk_user_data = xprt;
 		sk->sk_data_ready = xs_data_ready;
 		sk->sk_write_space = xs_udp_write_space;
+		sk->sk_use_task_frag = false;
 
 		xprt_set_connected(xprt);
@@ -2249,6 +2251,7 @@ static int xs_tcp_finish_connecting(struct rpc_xprt *xprt, struct socket *sock)
 		sk->sk_state_change = xs_tcp_state_change;
 		sk->sk_write_space = xs_tcp_write_space;
 		sk->sk_error_report = xs_error_report;
+		sk->sk_use_task_frag = false;
 
 		/* socket options */
 		sock_reset_flag(sk, SOCK_LINGER);

diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index d6fece1ed982..74a54295c164 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -489,6 +489,7 @@ static int espintcp_init_sk(struct sock *sk)
 	/* avoid using task_frag */
 	sk->sk_allocation = GFP_ATOMIC;
+	sk->sk_use_task_frag = false;
 
 	return 0;
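To make the failure mode concrete, the following stand-alone program is a
rough user-space analogy (illustrative only; it is not kernel code and not
part of this series). One shared fragment plays the role of
current->task_frag: an outer sender caches its view of the fragment, a
nested "reclaim" sender advances the same fragment, and the outer sender
then resumes with stale state and overwrites the nested sender's bytes:

/* User-space analogy of task_frag corruption (not kernel code).
 * A single shared fragment stands in for current->task_frag. */
#include <stdio.h>
#include <string.h>

struct page_frag {
	char page[64];		/* stands in for the frag's page */
	unsigned int offset;	/* next free byte */
};

static struct page_frag task_frag;	/* "per task", shared by nested users */

/* Copy data into the frag at its current offset, as a sendmsg path would. */
static void append(struct page_frag *frag, const char *data)
{
	memcpy(frag->page + frag->offset, data, strlen(data));
	frag->offset += strlen(data);
}

/* Inner sender: what an NFS/SUNRPC transmit during direct reclaim would do. */
static void nested_reclaim_send(void)
{
	append(&task_frag, "RECLAIM");
}

int main(void)
{
	/* The outer sender caches where its data will go... */
	unsigned int outer_off = task_frag.offset;

	/* ...but before it commits, an allocation recurses into reclaim,
	 * and the nested sender uses the same task_frag. */
	nested_reclaim_send();

	/* The outer sender resumes with its stale offset and overwrites
	 * the nested sender's bytes: both messages are now corrupt. */
	memcpy(task_frag.page + outer_off, "OUTER", 5);
	task_frag.offset = outer_off + 5;

	printf("frag contents: %.12s\n", task_frag.page);	/* "OUTERIM" */
	return 0;
}

In the kernel the effect is the same: whichever connection was mid-way
through filling the task frag when reclaim recursed ends up with
overlapping, corrupted data, and the eventual crash or corruption report
carries no trace of NFS or SUNRPC in its stack, exactly as described
above.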
From patchwork Fri Dec 16 12:45:28 2022
From: Benjamin Coddington
To: Jakub Kicinski, Paolo Abeni, "David S. Miller", Eric Dumazet
Cc: Guillaume Nault, Philipp Reisner, Lars Ellenberg, Christoph Böhmwalder,
    Jens Axboe, Josef Bacik, Keith Busch, Christoph Hellwig, Sagi Grimberg,
    Lee Duncan, Chris Leech, Mike Christie, "James E.J. Bottomley",
    "Martin K. Petersen", Valentina Manea, Shuah Khan, Greg Kroah-Hartman,
    David Howells, Marc Dionne, Steve French, Christine Caulfield,
    David Teigland, Mark Fasheh, Joel Becker, Joseph Qi,
    Eric Van Hensbergen, Latchesar Ionkov, Dominique Martinet,
    Ilya Dryomov, Xiubo Li, Chuck Lever, Jeff Layton, Trond Myklebust,
    Anna Schumaker, Steffen Klassert, Herbert Xu, netdev@vger.kernel.org
Subject: [PATCH net v4 3/3] net: simplify sk_page_frag
Date: Fri, 16 Dec 2022 07:45:28 -0500
Message-Id: <38d42a20af803cb51178510a64a558bfe9d10ddd.1671194454.git.bcodding@redhat.com>

Now that in-kernel socket users that may recurse during reclaim have been
converted to sk_use_task_frag = false, we can have sk_page_frag() simply
check that value.

Signed-off-by: Benjamin Coddington
Reviewed-by: Guillaume Nault
---
 include/net/sock.h | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index fefe1f4abf19..dcd72e6285b2 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2564,19 +2564,14 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
  * Both direct reclaim and page faults can nest inside other
  * socket operations and end up recursing into sk_page_frag()
  * while it's already in use: explicitly avoid task page_frag
- * usage if the caller is potentially doing any of them.
- * This assumes that page fault handlers use the GFP_NOFS flags or
- * explicitly disable sk_use_task_frag.
+ * when users disable sk_use_task_frag.
  *
  * Return: a per task page_frag if context allows that,
  * otherwise a per socket one.
  */
 static inline struct page_frag *sk_page_frag(struct sock *sk)
 {
-	if (sk->sk_use_task_frag &&
-	    (sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC |
-				  __GFP_FS)) ==
-	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
+	if (sk->sk_use_task_frag)
 		return &current->task_frag;
 
 	return &sk->sk_frag;
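With the heuristic gone, opting a kernel socket out of the task frag is a
single assignment once the socket exists. The sketch below is hypothetical
(the function name, address handling, and error paths are illustrative and
not taken from any of the call sites converted in patch 2/3); it only
shows where a future reclaim-safe user would clear ->sk_use_task_frag:

/* Hypothetical in-kernel user: open a TCP socket that is safe to drive
 * from memory-reclaim paths by opting out of current->task_frag. */
#include <linux/in.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <net/sock.h>

static int example_open_reclaim_safe_socket(struct net *net,
					    struct sockaddr_in *addr,
					    struct socket **res)
{
	struct socket *sock;
	int err;

	err = sock_create_kern(net, AF_INET, SOCK_STREAM, IPPROTO_TCP, &sock);
	if (err)
		return err;

	/* This socket may transmit while reclaiming memory (e.g. from
	 * writeback), so never let sk_page_frag() hand out the task frag. */
	sock->sk->sk_use_task_frag = false;

	err = kernel_connect(sock, (struct sockaddr *)addr, sizeof(*addr), 0);
	if (err)
		sock_release(sock);
	else
		*res = sock;
	return err;
}

Sockets initialized through sock_init_data() keep the default
sk_use_task_frag = true and continue to use current->task_frag on the
fast path as before.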