From patchwork Wed Feb 28 22:43:57 2024
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13576023
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Vakul Garg, Boris Pismenny, John Fastabend,
    Jakub Kicinski, "David S. Miller", Eric Dumazet, Paolo Abeni,
    Simon Horman
Subject: [PATCH net 1/4] tls: decrement decrypt_pending if no async completion will be called
Date: Wed, 28 Feb 2024 23:43:57 +0100

With mixed sync/async decryption, or failures of crypto_aead_decrypt,
we increment decrypt_pending but never do the corresponding decrement,
since tls_decrypt_done will not be called. In this case, decrement
decrypt_pending immediately to avoid getting stuck.

For example, the prequeue test gets stuck with mixed modes (one async
decrypt + one sync decrypt).
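
To make the accounting explicit, here is a minimal sketch of the
invariant being restored (hypothetical demo_ctx type and simplified
flow, not the actual tls_sw.c code): every increment of
decrypt_pending must be balanced by exactly one decrement, either from
the async completion callback or, when no callback will ever run, from
the submitter itself.

	/* Hypothetical, simplified accounting sketch; not tls_sw.c. */
	struct demo_ctx {
		atomic_t decrypt_pending;	/* completions still expected */
	};

	static int demo_submit_decrypt(struct demo_ctx *ctx,
				       struct aead_request *aead_req,
				       bool async)
	{
		int ret;

		if (async)
			atomic_inc(&ctx->decrypt_pending);

		ret = crypto_aead_decrypt(aead_req);
		if (ret == -EINPROGRESS)
			return 0;	/* completion callback will decrement */

		/* Sync completion or error: no callback will ever run, so
		 * the increment above must be undone here, otherwise the
		 * counter never reaches zero and waiters get stuck.
		 */
		if (async)
			atomic_dec(&ctx->decrypt_pending);

		return ret;
	}
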
Fixes: 94524d8fc965 ("net/tls: Add support for async decryption of tls records")
Signed-off-by: Sabrina Dubroca
---
 net/tls/tls_sw.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index de96959336c4..9f23ba321efe 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -289,6 +289,8 @@ static int tls_do_decryption(struct sock *sk,
 			return 0;
 
 		ret = crypto_wait_req(ret, &ctx->async_wait);
+	} else if (darg->async) {
+		atomic_dec(&ctx->decrypt_pending);
 	}
 	darg->async = false;
 

From patchwork Wed Feb 28 22:43:58 2024
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13576024
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Vakul Garg, Boris Pismenny, John Fastabend,
    Jakub Kicinski, "David S. Miller", Eric Dumazet, Paolo Abeni,
    Simon Horman
Subject: [PATCH net 2/4] tls: fix peeking with sync+async decryption
Date: Wed, 28 Feb 2024 23:43:58 +0100
Message-ID: <1b132d2b2b99296bfde54e8a67672d90d6d16e71.1709132643.git.sd@queasysnail.net>

If we peek from 2 records with a currently empty rx_list, and the
first record is decrypted synchronously but the second record is
decrypted async, the following happens:

  1. decrypt record 1 (sync)
  2. copy from record 1 to the userspace's msg
  3. queue the decrypted record to rx_list for future read(!PEEK)
  4. decrypt record 2 (async)
  5. queue record 2 to rx_list
  6. call process_rx_list to copy data from the 2nd record

We currently pass copied=0 as skip offset to process_rx_list, so we
end up copying once again from the first record. We should skip over
the data we've already copied.
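
The double copy is easiest to see in isolation; below is a small
standalone illustration (plain C with a hypothetical drain_queued
helper, not the kernel code) of why a skip offset is needed when data
already handed to the reader is also left queued for a later drain.

	#include <string.h>

	/* Hypothetical helper, not tls_sw.c: drain a queued record into
	 * 'out', skipping the bytes the reader already received earlier.
	 * Passing already_copied = 0 is the bug described above: the same
	 * leading bytes would be delivered twice.
	 */
	static size_t drain_queued(const char *queued, size_t queued_len,
				   char *out, size_t out_len,
				   size_t already_copied)
	{
		size_t n;

		if (already_copied >= queued_len)
			return 0;		/* nothing new to copy */

		n = queued_len - already_copied;
		if (n > out_len)
			n = out_len;
		memcpy(out, queued + already_copied, n);
		return n;
	}
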
Seen with selftest tls.12_aes_gcm.recv_peek_large_buf_mult_recs

Fixes: 692d7b5d1f91 ("tls: Fix recvmsg() to be able to peek across multiple records")
Signed-off-by: Sabrina Dubroca
---
I'm not very happy with this, because the logic is already hard to
follow and I'm adding yet another variable counting how many bytes
we've handled, but everything else I tried broke at least one test
case :( I'll see if I can rework this for net-next.

 net/tls/tls_sw.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 9f23ba321efe..1394fc44f378 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -1950,6 +1950,7 @@ int tls_sw_recvmsg(struct sock *sk,
 	struct strp_msg *rxm;
 	struct tls_msg *tlm;
 	ssize_t copied = 0;
+	ssize_t peeked = 0;
 	bool async = false;
 	int target, err;
 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
@@ -2097,8 +2098,10 @@ int tls_sw_recvmsg(struct sock *sk,
 		if (err < 0)
 			goto put_on_rx_list_err;
 
-		if (is_peek)
+		if (is_peek) {
+			peeked += chunk;
 			goto put_on_rx_list;
+		}
 
 		if (partially_consumed) {
 			rxm->offset += chunk;
@@ -2137,8 +2140,8 @@ int tls_sw_recvmsg(struct sock *sk,
 
 	/* Drain records from the rx_list & copy if required */
 	if (is_peek || is_kvec)
-		err = process_rx_list(ctx, msg, &control, copied,
-				      decrypted, is_peek, NULL);
+		err = process_rx_list(ctx, msg, &control, copied + peeked,
+				      decrypted - peeked, is_peek, NULL);
 	else
 		err = process_rx_list(ctx, msg, &control, 0,
 				      async_copy_bytes, is_peek, NULL);

From patchwork Wed Feb 28 22:43:59 2024
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13576025
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Vakul Garg, Boris Pismenny, John Fastabend,
    Jakub Kicinski,
Miller" , Eric Dumazet , Paolo Abeni , Simon Horman Subject: [PATCH net 3/4] tls: separate no-async decryption request handling from async Date: Wed, 28 Feb 2024 23:43:59 +0100 Message-ID: <47bde5f649707610eaef9f0d679519966fc31061.1709132643.git.sd@queasysnail.net> X-Mailer: git-send-email 2.43.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-GND-Sasl: sd@queasysnail.net X-Patchwork-Delegate: kuba@kernel.org If we're not doing async, the handling is much simpler. There's no reference counting, we just need to wait for the completion to wake us up and return its result. We should preferably also use a separate crypto_wait. I'm not seeing a UAF as I did in the past, I think aec7961916f3 ("tls: fix race between async notify and socket close") took care of it. This will make the next fix easier. Signed-off-by: Sabrina Dubroca --- net/tls/tls_sw.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c index 1394fc44f378..1fd37fe13ffd 100644 --- a/net/tls/tls_sw.c +++ b/net/tls/tls_sw.c @@ -274,9 +274,15 @@ static int tls_do_decryption(struct sock *sk, DEBUG_NET_WARN_ON_ONCE(atomic_read(&ctx->decrypt_pending) < 1); atomic_inc(&ctx->decrypt_pending); } else { + DECLARE_CRYPTO_WAIT(wait); + aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG, - crypto_req_done, &ctx->async_wait); + crypto_req_done, &wait); + ret = crypto_aead_decrypt(aead_req); + if (ret == -EINPROGRESS || ret == -EBUSY) + ret = crypto_wait_req(ret, &wait); + return ret; } ret = crypto_aead_decrypt(aead_req); @@ -285,10 +291,7 @@ static int tls_do_decryption(struct sock *sk, ret = ret ?: -EINPROGRESS; } if (ret == -EINPROGRESS) { - if (darg->async) - return 0; - - ret = crypto_wait_req(ret, &ctx->async_wait); + return 0; } else if (darg->async) { atomic_dec(&ctx->decrypt_pending); } From patchwork Wed Feb 28 22:44:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sabrina Dubroca X-Patchwork-Id: 13576026 X-Patchwork-Delegate: kuba@kernel.org Received: from relay7-d.mail.gandi.net (relay7-d.mail.gandi.net [217.70.183.200]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 42CFA72901 for ; Wed, 28 Feb 2024 22:44:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.70.183.200 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709160264; cv=none; b=mPD0GqfSRAIGQWWp4jgzMqSyMqdKhe0p5qh50BHXi+z9YNNcUhZHQKLoYLPaxZdjNpx6tTpr/gvX7MYOa5DgplpGU3l59ONb2xP9ytLkYGYbp6cKS0o34kktSgrTMJtXQ9v4g9cNZOSoN+8FOlSAGaOMtwFDSp9kOKjJXpEA4pA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1709160264; c=relaxed/simple; bh=9ANgazVXWP2XssNvo+LtG4lvng3Sqtk5tBbhNztdISU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=dz7r6uWGdSJu6LcQdDHWPcopLerczqCF4mmnBiVn1BiomFtfJzN74If/5G3/fDMNdOQQoBSuC5jHReNtoTalAJeSfF6quHGPIVcIaIPfMEUlY4IS5aDF1z+zmWJnvOnw+8MD9L/f0ywut5tCZGNW2xm17NcJFSSF4pzOMgBqMFo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=queasysnail.net; spf=none smtp.mailfrom=queasysnail.net; arc=none smtp.client-ip=217.70.183.200 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=queasysnail.net 
From patchwork Wed Feb 28 22:44:00 2024
X-Patchwork-Submitter: Sabrina Dubroca
X-Patchwork-Id: 13576026
From: Sabrina Dubroca
To: netdev@vger.kernel.org
Cc: Sabrina Dubroca, Vakul Garg, Boris Pismenny, John Fastabend,
    Jakub Kicinski, "David S. Miller", Eric Dumazet, Paolo Abeni,
    Simon Horman
Subject: [PATCH net 4/4] tls: fix use-after-free on failed backlog decryption
Date: Wed, 28 Feb 2024 23:44:00 +0100
Message-ID: <4755dd8d9bebdefaa19ce1439b833d6199d4364c.1709132643.git.sd@queasysnail.net>

When the decrypt request goes to the backlog and crypto_aead_decrypt
returns -EBUSY, tls_do_decryption will wait until all async
decryptions have completed. If one of them fails, tls_do_decryption
will return -EBADMSG and tls_decrypt_sg jumps to the error path,
releasing all the pages. But the pages have been passed to the async
callback, and have already been released by tls_decrypt_done.

The only true async case is when crypto_aead_decrypt returns
-EINPROGRESS. With -EBUSY, we already waited so we can tell
tls_sw_recvmsg that the data is available for immediate copy, but we
need to notify tls_decrypt_sg (via the new ->async_done flag) that the
memory has already been released.

Fixes: 859054147318 ("net: tls: handle backlogging of crypto requests")
Signed-off-by: Sabrina Dubroca
---
 net/tls/tls_sw.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 1fd37fe13ffd..211f57164cb6 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -52,6 +52,7 @@ struct tls_decrypt_arg {
 	struct_group(inargs,
 	bool zc;
 	bool async;
+	bool async_done;
 	u8 tail;
 	);
 
@@ -286,15 +287,18 @@ static int tls_do_decryption(struct sock *sk,
 	}
 
 	ret = crypto_aead_decrypt(aead_req);
+	if (ret == -EINPROGRESS)
+		return 0;
+
 	if (ret == -EBUSY) {
 		ret = tls_decrypt_async_wait(ctx);
-		ret = ret ?: -EINPROGRESS;
-	}
-	if (ret == -EINPROGRESS) {
-		return 0;
-	} else if (darg->async) {
-		atomic_dec(&ctx->decrypt_pending);
+		darg->async_done = true;
+		/* all completions have run, we're not doing async anymore */
+		darg->async = false;
+		return ret;
 	}
+
+	atomic_dec(&ctx->decrypt_pending);
 	darg->async = false;
 
 	return ret;
@@ -1593,8 +1597,11 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 	/* Prepare and submit AEAD request */
 	err = tls_do_decryption(sk, sgin, sgout, dctx->iv,
 				data_len + prot->tail_size, aead_req, darg);
-	if (err)
+	if (err) {
+		if (darg->async_done)
+			goto exit_free_skb;
 		goto exit_free_pages;
+	}
 
 	darg->skb = clear_skb ?: tls_strp_msg(ctx);
 	clear_skb = NULL;
@@ -1606,6 +1613,9 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 		return err;
 	}
 
+	if (unlikely(darg->async_done))
+		return 0;
+
 	if (prot->tail_size)
 		darg->tail = dctx->tail;
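
As a summary of the ownership rules described above, a hedged sketch
(hypothetical demo helper reusing the calls and fields from the patch;
decrypt_pending accounting omitted, not the actual tls_do_decryption
body):

	/* Hedged summary sketch, not tls_do_decryption(): who owns the
	 * decryption pages after submitting the AEAD request.
	 */
	static int demo_submit_and_classify(struct tls_sw_context_rx *ctx,
					    struct aead_request *aead_req,
					    struct tls_decrypt_arg *darg)
	{
		int ret = crypto_aead_decrypt(aead_req);

		if (ret == -EINPROGRESS) {
			/* truly async: tls_decrypt_done runs later and
			 * releases the pages */
			return 0;
		}

		if (ret == -EBUSY) {
			/* backlogged: wait for every pending completion;
			 * the callbacks have already released the pages,
			 * so tell tls_decrypt_sg not to free them again
			 * on the error path */
			ret = tls_decrypt_async_wait(ctx);
			darg->async_done = true;
			darg->async = false;
			return ret;
		}

		/* synchronous success or failure: the caller still owns
		 * the pages and the normal error path stays correct */
		darg->async = false;
		return ret;
	}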