From patchwork Mon Jun 12 14:38:30 2023
X-Patchwork-Submitter: Hannes Reinecke
X-Patchwork-Id: 13276778
X-Patchwork-Delegate: kuba@kernel.org
From: Hannes Reinecke
To: Christoph Hellwig
Cc: Sagi Grimberg, Keith Busch, linux-nvme@lists.infradead.org,
    Jakub Kicinski, netdev@vger.kernel.org, Hannes Reinecke
Subject: [PATCH 1/4] net/tls: handle MSG_EOR for tls_sw TX flow
Date: Mon, 12 Jun 2023 16:38:30 +0200
Message-Id: <20230612143833.70805-2-hare@suse.de>
In-Reply-To: <20230612143833.70805-1-hare@suse.de>
References: <20230612143833.70805-1-hare@suse.de>
X-Mailer: git-send-email 2.35.3

tls_sw_sendmsg() / tls_sw_do_sendpage() already handle MSG_MORE /
MSG_SENDPAGE_NOTLAST, but bail out on MSG_EOR.
Seeing that MSG_EOR is essentially the opposite of MSG_MORE /
MSG_SENDPAGE_NOTLAST, handle MSG_EOR by treating it as the negation
of MSG_MORE, and error out if MSG_EOR is specified together with
either MSG_MORE or MSG_SENDPAGE_NOTLAST.
Cc: Jakub Kicinski
Cc: netdev@vger.kernel.org
Signed-off-by: Hannes Reinecke
---
 net/tls/tls_sw.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 635b8bf6b937..16eae0c5c819 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -953,7 +953,10 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	int pending;
 
 	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
-			       MSG_CMSG_COMPAT))
+			       MSG_EOR | MSG_CMSG_COMPAT))
+		return -EOPNOTSUPP;
+
+	if (!eor && msg->msg_flags & MSG_EOR)
 		return -EOPNOTSUPP;
 
 	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
@@ -1274,11 +1277,15 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
 int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
 			   int offset, size_t size, int flags)
 {
-	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL | MSG_EOR |
 		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY |
 		      MSG_NO_SHARED_FRAGS))
 		return -EOPNOTSUPP;
 
+	if ((flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST)) &&
+	    (flags & MSG_EOR))
+		return -EINVAL;
+
 	return tls_sw_do_sendpage(sk, page, offset, size, flags);
 }
 
@@ -1288,10 +1295,14 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	int ret;
 
-	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL |
+	if (flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL | MSG_EOR |
 		      MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
 		return -EOPNOTSUPP;
 
+	if ((flags & (MSG_MORE | MSG_SENDPAGE_NOTLAST)) &&
+	    (flags & MSG_EOR))
+		return -EOPNOTSUPP;
+
 	ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
 	if (ret)
 		return ret;
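
As an aside for reviewers (not part of the patch): below is a minimal
userspace sketch of the semantics this enables. It assumes 'fd' is a
connected TCP socket on which kTLS TX has already been enabled via
setsockopt(SOL_TLS, TLS_TX, ...); the helper names are illustrative
only. MSG_EOR alone is now accepted and treated as the negation of
MSG_MORE (the current record is closed and pushed), while combining it
with MSG_MORE is rejected by tls_sw_sendmsg().

/* Illustrative sketch only, not part of the patch.
 * Assumes 'fd' is a connected TCP socket with kTLS TX already enabled
 * via setsockopt(fd, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info)).
 */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

static ssize_t send_with_flags(int fd, const void *buf, size_t len, int flags)
{
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

	return sendmsg(fd, &msg, flags);
}

void demo(int fd)
{
	static const char payload[] = "hello";

	/* MSG_EOR alone: accepted, treated as the negation of MSG_MORE,
	 * i.e. the current TLS record is closed and transmitted.
	 */
	if (send_with_flags(fd, payload, sizeof(payload), MSG_EOR) < 0)
		perror("sendmsg(MSG_EOR)");

	/* MSG_MORE | MSG_EOR is contradictory and now fails with
	 * EOPNOTSUPP in tls_sw_sendmsg().
	 */
	if (send_with_flags(fd, payload, sizeof(payload),
			    MSG_MORE | MSG_EOR) < 0 && errno == EOPNOTSUPP)
		fprintf(stderr, "conflicting MSG_MORE|MSG_EOR rejected\n");
}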