From patchwork Mon Sep 25 12:02:58 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397707
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 01/12] iov_iter: Remove last_offset from iov_iter as it was for ITER_PIPE
Date: Mon, 25 Sep 2023 13:02:58 +0100
Message-ID: <20230925120309.1731676-2-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Now that ITER_PIPE has been removed, iov_iter::last_offset is no longer
used, so remove it.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 42bce38a8e87..2000e42a6586 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -44,10 +44,7 @@ struct iov_iter {
 	bool nofault;
 	bool data_source;
 	bool user_backed;
-	union {
-		size_t iov_offset;
-		int last_offset;
-	};
+	size_t iov_offset;
 	/*
 	 * Hack alert: overlay ubuf_iovec with iovec + count, so
 	 * that the members resolve correctly regardless of the type
Peter Anvin" , x86@kernel.org Subject: [PATCH v7 02/12] iov_iter, x86: Be consistent about the __user tag on copy_mc_to_user() Date: Mon, 25 Sep 2023 13:02:59 +0100 Message-ID: <20230925120309.1731676-3-dhowells@redhat.com> In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com> References: <20230925120309.1731676-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.9 Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org copy_mc_to_user() has the destination marked __user on powerpc, but not on x86; the latter results in a sparse warning in lib/iov_iter.c. Fix this by applying the tag on x86 too. Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()") Signed-off-by: David Howells cc: Dan Williams cc: Thomas Gleixner cc: Ingo Molnar cc: Borislav Petkov cc: Dave Hansen cc: "H. Peter Anvin" cc: Alexander Viro cc: Jens Axboe cc: Christoph Hellwig cc: Christian Brauner cc: Matthew Wilcox cc: Linus Torvalds cc: David Laight cc: x86@kernel.org cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org Acked-by: Borislav Petkov (AMD) --- arch/x86/include/asm/uaccess.h | 2 +- arch/x86/lib/copy_mc.c | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index 8bae40a66282..5c367c1290c3 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -496,7 +496,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len); #define copy_mc_to_kernel copy_mc_to_kernel unsigned long __must_check -copy_mc_to_user(void *to, const void *from, unsigned len); +copy_mc_to_user(void __user *to, const void *from, unsigned len); #endif /* diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c index 80efd45a7761..6e8b7e600def 100644 --- a/arch/x86/lib/copy_mc.c +++ b/arch/x86/lib/copy_mc.c @@ -70,23 +70,23 @@ unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigne } EXPORT_SYMBOL_GPL(copy_mc_to_kernel); -unsigned long __must_check copy_mc_to_user(void *dst, const void *src, unsigned len) +unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, unsigned len) { unsigned long ret; if (copy_mc_fragile_enabled) { __uaccess_begin(); - ret = copy_mc_fragile(dst, src, len); + ret = copy_mc_fragile((__force void *)dst, src, len); __uaccess_end(); return ret; } if (static_cpu_has(X86_FEATURE_ERMS)) { __uaccess_begin(); - ret = copy_mc_enhanced_fast_string(dst, src, len); + ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len); __uaccess_end(); return ret; } - return copy_user_generic(dst, src, len); + return copy_user_generic((__force void *)dst, src, len); } From patchwork Mon Sep 25 12:03:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 13397705 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 50051CE7A81 for ; Mon, 25 Sep 2023 12:04:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231131AbjIYMEu (ORCPT ); Mon, 25 Sep 2023 08:04:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38596 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230470AbjIYMEs (ORCPT ); 
From patchwork Mon Sep 25 12:03:00 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397705
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Jaroslav Kysela, Takashi Iwai, Oswald Buddenhagen, Suren Baghdasaryan,
    Kuninori Morimoto, alsa-devel@alsa-project.org
Subject: [PATCH v7 03/12] sound: Fix snd_pcm_readv()/writev() to use iov access functions
Date: Mon, 25 Sep 2023 13:03:00 +0100
Message-ID: <20230925120309.1731676-4-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Fix snd_pcm_readv()/writev() to use iov access functions rather than
poking at the iov_iter internals directly.

Signed-off-by: David Howells
Reviewed-by: Jaroslav Kysela
Reviewed-by: Takashi Iwai
cc: Oswald Buddenhagen
cc: Jens Axboe
cc: Suren Baghdasaryan
cc: Kuninori Morimoto
cc: alsa-devel@alsa-project.org
---
 sound/core/pcm_native.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index bd9ddf412b46..9a69236fa207 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3527,7 +3527,7 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!to->user_backed)
+	if (!user_backed_iter(to))
 		return -EINVAL;
 	if (to->nr_segs > 1024 || to->nr_segs != runtime->channels)
 		return -EINVAL;
@@ -3567,7 +3567,7 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!from->user_backed)
+	if (!user_backed_iter(from))
 		return -EINVAL;
 	if (from->nr_segs > 128 || from->nr_segs != runtime->channels ||
 	    !frame_aligned(runtime, iov->iov_len))
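For other drivers doing the same conversion, the target shape of such a
handler is sketched below. The function and its checks are illustrative
only and are not part of the patch:

#include <linux/fs.h>
#include <linux/uio.h>

static ssize_t example_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	/* Test for UBUF/IOVEC backing through the accessor ... */
	if (!user_backed_iter(from))
		return -EINVAL;
	/* ... and reach the iovec array through iter_iov(), not ->__iov. */
	if (!iter_iov(from)->iov_len)
		return -EINVAL;
	return 0;
}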
From patchwork Mon Sep 25 12:03:01 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397708
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dennis Dalessandro, Jason Gunthorpe, Leon Romanovsky,
    linux-rdma@vger.kernel.org
Subject: [PATCH v7 04/12] infiniband: Use user_backed_iter() to see if iterator is UBUF/IOVEC
Date: Mon, 25 Sep 2023 13:03:01 +0100
Message-ID: <20230925120309.1731676-5-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Use user_backed_iter() to see if iterator is UBUF/IOVEC rather than poking
inside the iterator.

Signed-off-by: David Howells
cc: Dennis Dalessandro
cc: Jason Gunthorpe
cc: Leon Romanovsky
cc: linux-rdma@vger.kernel.org
---
 drivers/infiniband/hw/hfi1/file_ops.c    | 2 +-
 drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index a5ab22cedd41..788fc249234f 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -267,7 +267,7 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 
 	if (!HFI1_CAP_IS_KSET(SDMA))
 		return -EINVAL;
-	if (!from->user_backed)
+	if (!user_backed_iter(from))
 		return -EINVAL;
 	idx = srcu_read_lock(&fd->pq_srcu);
 	pq = srcu_dereference(fd->pq, &fd->pq_srcu);
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 152952127f13..29e4c59aa23b 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -2244,7 +2244,7 @@ static ssize_t qib_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	struct qib_ctxtdata *rcd = ctxt_fp(iocb->ki_filp);
 	struct qib_user_sdma_queue *pq = fp->pq;
 
-	if (!from->user_backed || !from->nr_segs || !pq)
+	if (!user_backed_iter(from) || !from->nr_segs || !pq)
 		return -EINVAL;
 
 	return qib_user_sdma_writev(rcd, pq, iter_iov(from), from->nr_segs);
From patchwork Mon Sep 25 12:03:02 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397709
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 05/12] iov_iter: Renumber ITER_* constants
Date: Mon, 25 Sep 2023 13:03:02 +0100
Message-ID: <20230925120309.1731676-6-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Renumber the ITER_* iterator-type constants to put things in the same
order as in the iteration functions and to group user-backed iterators at
the bottom.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 2000e42a6586..bef8e56aa45c 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -21,12 +21,12 @@ struct kvec {
 
 enum iter_type {
 	/* iter types */
+	ITER_UBUF,
 	ITER_IOVEC,
-	ITER_KVEC,
 	ITER_BVEC,
+	ITER_KVEC,
 	ITER_XARRAY,
 	ITER_DISCARD,
-	ITER_UBUF,
 };
 
 #define ITER_SOURCE	1	// == WRITE
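The following patch will depend on ITER_UBUF and ITER_IOVEC now having the
values 0 and 1. A compile-time guard on that property could be written as
below; this is purely illustrative, the patch adds no such assertion:

#include <linux/build_bug.h>
#include <linux/uio.h>

/* The user-backed iterator types must keep the two lowest values. */
static_assert(ITER_UBUF == 0);
static_assert(ITER_IOVEC == 1);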
From patchwork Mon Sep 25 12:03:03 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397711
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 06/12] iov_iter: Derive user-backedness from the iterator type
Date: Mon, 25 Sep 2023 13:03:03 +0100
Message-ID: <20230925120309.1731676-7-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Use the iterator type to determine whether an iterator is user-backed or
not rather than using a special flag for it.  Now that ITER_UBUF and
ITER_IOVEC are 0 and 1, they can be checked with a single comparison.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 4 +---
 lib/iov_iter.c      | 1 -
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index bef8e56aa45c..65d9143f83c8 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -43,7 +43,6 @@ struct iov_iter {
 	bool copy_mc;
 	bool nofault;
 	bool data_source;
-	bool user_backed;
 	size_t iov_offset;
 	/*
 	 * Hack alert: overlay ubuf_iovec with iovec + count, so
@@ -140,7 +139,7 @@ static inline unsigned char iov_iter_rw(const struct iov_iter *i)
 
 static inline bool user_backed_iter(const struct iov_iter *i)
 {
-	return i->user_backed;
+	return iter_is_ubuf(i) || iter_is_iovec(i);
 }
 
 /*
@@ -380,7 +379,6 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 	*i = (struct iov_iter) {
 		.iter_type = ITER_UBUF,
 		.copy_mc = false,
-		.user_backed = true,
 		.data_source = direction,
 		.ubuf = buf,
 		.count = count,
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 27234a820eeb..227c9f536b94 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -290,7 +290,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 		.iter_type = ITER_IOVEC,
 		.copy_mc = false,
 		.nofault = false,
-		.user_backed = true,
 		.data_source = direction,
 		.__iov = iov,
 		.nr_segs = nr_segs,
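The single comparison mentioned above is the form the compiler can reduce
the two equality tests to once the constants are 0 and 1. Written out by
hand it would look like this sketch (the patch itself keeps the more
readable two-way test):

static inline bool user_backed_iter(const struct iov_iter *i)
{
	/* Relies on ITER_UBUF == 0 and ITER_IOVEC == 1. */
	return i->iter_type <= ITER_IOVEC;
}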
From patchwork Mon Sep 25 12:03:04 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397712
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 07/12] iov_iter: Convert iterate*() to inline funcs
Date: Mon, 25 Sep 2023 13:03:04 +0100
Message-ID: <20230925120309.1731676-8-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

Convert the iov_iter iteration macros to inline functions to make the code
easier to follow.

The functions are marked __always_inline as we don't want to end up with
indirect calls in the code.

This, however, leaves dealing with ->copy_mc in an awkward situation since
the step function (memcpy_from_iter_mc()) needs to test the flag in the
iterator, but isn't passed the iterator.  This will be dealt with in a
follow-up patch.

The variable names in the per-type iterator functions have been harmonised
as much as possible and made clearer as to the variable purpose.

The iterator functions are also moved to a header file so that other
operations that need to scan over an iterator can be added.  For instance,
the rbd driver could use this to scan a buffer to see if it is all zeros
and libceph could use this to generate a crc.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/3710261.1691764329@warthog.procyon.org.uk/ # v1
Link: https://lore.kernel.org/r/855.1692047347@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/20230816120741.534415-1-dhowells@redhat.com/ # v3
Reviewed-by: David Howells
Tested-by: David Howells
---
Notes:
    Changes
    =======
    ver #5)
     - Merge in patch to move iteration framework to a header file.
     - Move "iter->count - progress" into individual iteration subfunctions.

 include/linux/iov_iter.h | 274 ++++++++++++++++++++++++++
 lib/iov_iter.c           | 416 ++++++++++++++++----------------
 2 files changed, 449 insertions(+), 241 deletions(-)
 create mode 100644 include/linux/iov_iter.h

diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
new file mode 100644
index 000000000000..270454a6703d
--- /dev/null
+++ b/include/linux/iov_iter.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* I/O iterator iteration building functions.
+ *
+ * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#ifndef _LINUX_IOV_ITER_H
+#define _LINUX_IOV_ITER_H
+
+#include <linux/uio.h>
+#include <linux/bvec.h>
+
+typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len,
+			     void *priv, void *priv2);
+typedef size_t (*iov_ustep_f)(void __user *iter_base, size_t progress, size_t len,
+			      void *priv, void *priv2);
+
+/*
+ * Handle ITER_UBUF.
+ */
+static __always_inline
+size_t iterate_ubuf(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_ustep_f step)
+{
+	void __user *base = iter->ubuf;
+	size_t progress = 0, remain;
+
+	remain = step(base + iter->iov_offset, 0, len, priv, priv2);
+	progress = len - remain;
+	iter->iov_offset += progress;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_IOVEC.
+ */
+static __always_inline
+size_t iterate_iovec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		     iov_ustep_f step)
+{
+	const struct iovec *p = iter->__iov;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t part = min(len, p->iov_len - skip);
+
+		if (likely(part)) {
+			remain = step(p->iov_base + skip, progress, part, priv, priv2);
+			consumed = part - remain;
+			progress += consumed;
+			skip += consumed;
+			len -= consumed;
+			if (skip < p->iov_len)
+				break;
+		}
+		p++;
+		skip = 0;
+	} while (len);
+
+	iter->nr_segs -= p - iter->__iov;
+	iter->__iov = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_KVEC.
+ */
+static __always_inline
+size_t iterate_kvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_step_f step)
+{
+	const struct kvec *p = iter->kvec;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t part = min(len, p->iov_len - skip);
+
+		if (likely(part)) {
+			remain = step(p->iov_base + skip, progress, part, priv, priv2);
+			consumed = part - remain;
+			progress += consumed;
+			skip += consumed;
+			len -= consumed;
+			if (skip < p->iov_len)
+				break;
+		}
+		p++;
+		skip = 0;
+	} while (len);
+
+	iter->nr_segs -= p - iter->kvec;
+	iter->kvec = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_BVEC.
+ */
+static __always_inline
+size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_step_f step)
+{
+	const struct bio_vec *p = iter->bvec;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t offset = p->bv_offset + skip, part;
+		void *kaddr = kmap_local_page(p->bv_page + offset / PAGE_SIZE);
+
+		part = min3(len,
+			    (size_t)(p->bv_len - skip),
+			    (size_t)(PAGE_SIZE - offset % PAGE_SIZE));
+		remain = step(kaddr + offset % PAGE_SIZE, progress, part, priv, priv2);
+		kunmap_local(kaddr);
+		consumed = part - remain;
+		len -= consumed;
+		progress += consumed;
+		skip += consumed;
+		if (skip >= p->bv_len) {
+			skip = 0;
+			p++;
+		}
+		if (remain)
+			break;
+	} while (len);
+
+	iter->nr_segs -= p - iter->bvec;
+	iter->bvec = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_XARRAY.
+ */
+static __always_inline
+size_t iterate_xarray(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		      iov_step_f step)
+{
+	struct folio *folio;
+	size_t progress = 0;
+	loff_t start = iter->xarray_start + iter->iov_offset;
+	pgoff_t index = start / PAGE_SIZE;
+	XA_STATE(xas, iter->xarray, index);
+
+	rcu_read_lock();
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		size_t remain, consumed, offset, part, flen;
+
+		if (xas_retry(&xas, folio))
+			continue;
+		if (WARN_ON(xa_is_value(folio)))
+			break;
+		if (WARN_ON(folio_test_hugetlb(folio)))
+			break;
+
+		offset = offset_in_folio(folio, start + progress);
+		flen = min(folio_size(folio) - offset, len);
+
+		while (flen) {
+			void *base = kmap_local_folio(folio, offset);
+
+			part = min_t(size_t, flen,
+				     PAGE_SIZE - offset_in_page(offset));
+			remain = step(base, progress, part, priv, priv2);
+			kunmap_local(base);
+
+			consumed = part - remain;
+			progress += consumed;
+			len -= consumed;
+
+			if (remain || len == 0)
+				goto out;
+			flen -= consumed;
+			offset += consumed;
+		}
+	}
+
+out:
+	rcu_read_unlock();
+	iter->iov_offset += progress;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_DISCARD.
+ */
+static __always_inline
+size_t iterate_discard(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		       iov_step_f step)
+{
+	size_t progress = len;
+
+	iter->count -= progress;
+	return progress;
+}
+
+/**
+ * iterate_and_advance2 - Iterate over an iterator
+ * @iter: The iterator to iterate over.
+ * @len: The amount to iterate over.
+ * @priv: Data for the step functions.
+ * @priv2: More data for the step functions.
+ * @ustep: Function for UBUF/IOVEC iterators; given __user addresses.
+ * @step: Function for other iterators; given kernel addresses.
+ *
+ * Iterate over the next part of an iterator, up to the specified length.  The
+ * buffer is presented in segments, which for kernel iteration are broken up by
+ * physical pages and mapped, with the mapped address being presented.
+ *
+ * Two step functions, @step and @ustep, must be provided, one for handling
+ * mapped kernel addresses and the other is given user addresses which have the
+ * potential to fault since no pinning is performed.
+ *
+ * The step functions are passed the address and length of the segment, @priv,
+ * @priv2 and the amount of data so far iterated over (which can, for example,
+ * be added to @priv to point to the right part of a second buffer).  The step
+ * functions should return the amount of the segment they didn't process (ie. 0
+ * indicates complete processing).
+ *
+ * This function returns the amount of data processed (ie. 0 means nothing was
+ * processed and the value of @len means processed to completion).
+ */
+static __always_inline
+size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
+			    void *priv2, iov_ustep_f ustep, iov_step_f step)
+{
+	if (unlikely(iter->count < len))
+		len = iter->count;
+	if (unlikely(!len))
+		return 0;
+
+	if (likely(iter_is_ubuf(iter)))
+		return iterate_ubuf(iter, len, priv, priv2, ustep);
+	if (likely(iter_is_iovec(iter)))
+		return iterate_iovec(iter, len, priv, priv2, ustep);
+	if (iov_iter_is_bvec(iter))
+		return iterate_bvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_kvec(iter))
+		return iterate_kvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_xarray(iter))
+		return iterate_xarray(iter, len, priv, priv2, step);
+	return iterate_discard(iter, len, priv, priv2, step);
+}
+
+/**
+ * iterate_and_advance - Iterate over an iterator
+ * @iter: The iterator to iterate over.
+ * @len: The amount to iterate over.
+ * @priv: Data for the step functions.
+ * @ustep: Function for UBUF/IOVEC iterators; given __user addresses.
+ * @step: Function for other iterators; given kernel addresses.
+ *
+ * As iterate_and_advance2(), but priv2 is always NULL.
+ */
+static __always_inline
+size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
+			   iov_ustep_f ustep, iov_step_f step)
+{
+	return iterate_and_advance2(iter, len, priv, NULL, ustep, step);
+}
+
+#endif /* _LINUX_IOV_ITER_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 227c9f536b94..65374ee91ecd 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -13,189 +13,69 @@
 #include <net/checksum.h>
 #include <linux/scatterlist.h>
 #include <linux/instrumented.h>
+#include <linux/iov_iter.h>
 
-/* covers ubuf and kbuf alike */
-#define iterate_buf(i, n, base, len, off, __p, STEP) {		\
-	size_t __maybe_unused off = 0;				\
-	len = n;						\
-	base = __p + i->iov_offset;				\
-	len -= (STEP);						\
-	i->iov_offset += len;					\
-	n = len;						\
-}
-
-/* covers iovec and kvec alike */
-#define iterate_iovec(i, n, base, len, off, __p, STEP) {	\
-	size_t off = 0;						\
-	size_t skip = i->iov_offset;				\
-	do {							\
-		len = min(n, __p->iov_len - skip);		\
-		if (likely(len)) {				\
-			base = __p->iov_base + skip;		\
-			len -= (STEP);				\
-			off += len;				\
-			skip += len;				\
-			n -= len;				\
-			if (skip < __p->iov_len)		\
-				break;				\
-		}						\
-		__p++;						\
-		skip = 0;					\
-	} while (n);						\
-	i->iov_offset = skip;					\
-	n = off;						\
-}
-
-#define iterate_bvec(i, n, base, len, off, p, STEP) {		\
-	size_t off = 0;						\
-	unsigned skip = i->iov_offset;				\
-	while (n) {						\
-		unsigned offset = p->bv_offset + skip;		\
-		unsigned left;					\
-		void *kaddr = kmap_local_page(p->bv_page +	\
-					offset / PAGE_SIZE);	\
-		base = kaddr + offset % PAGE_SIZE;		\
-		len = min(min(n, (size_t)(p->bv_len - skip)),	\
-		     (size_t)(PAGE_SIZE - offset % PAGE_SIZE));	\
-		left = (STEP);					\
-		kunmap_local(kaddr);				\
-		len -= left;					\
-		off += len;					\
-		skip += len;					\
-		if (skip == p->bv_len) {			\
-			skip = 0;				\
-			p++;					\
-		}						\
-		n -= len;					\
-		if (left)					\
-			break;					\
-	}							\
-	i->iov_offset = skip;					\
-	n = off;						\
-}
-
-#define iterate_xarray(i, n, base, len, __off, STEP) {		\
-	__label__ __out;					\
-	size_t __off = 0;					\
-	struct folio *folio;					\
-	loff_t start = i->xarray_start + i->iov_offset;		\
-	pgoff_t index = start / PAGE_SIZE;			\
-	XA_STATE(xas, i->xarray, index);			\
-								\
-	len = PAGE_SIZE - offset_in_page(start);		\
-	rcu_read_lock();					\
-	xas_for_each(&xas, folio, ULONG_MAX) {			\
-		unsigned left;					\
-		size_t offset;					\
-		if (xas_retry(&xas, folio))			\
-			continue;				\
-		if (WARN_ON(xa_is_value(folio)))		\
-			break;					\
-		if (WARN_ON(folio_test_hugetlb(folio)))		\
-			break;					\
-		offset = offset_in_folio(folio, start + __off);	\
-		while (offset < folio_size(folio)) {		\
-			base = kmap_local_folio(folio, offset);	\
-			len = min(n, len);			\
-			left = (STEP);				\
-			kunmap_local(base);			\
-			len -= left;				\
-			__off += len;				\
-			n -= len;				\
-			if (left || n == 0)			\
-				goto __out;			\
-			offset += len;				\
-			len = PAGE_SIZE;			\
-		}						\
-	}							\
-__out:								\
-	rcu_read_unlock();					\
-	i->iov_offset += __off;					\
-	n = __off;						\
-}
-
-#define __iterate_and_advance(i, n, base, len, off, I, K) {	\
-	if (unlikely(i->count < n))				\
-		n = i->count;					\
-	if (likely(n)) {					\
-		if (likely(iter_is_ubuf(i))) {			\
-			void __user *base;			\
-			size_t len;				\
-			iterate_buf(i, n, base, len, off,	\
-						i->ubuf, (I))	\
-		} else if (likely(iter_is_iovec(i))) {		\
-			const struct iovec *iov = iter_iov(i);	\
-			void __user *base;			\
-			size_t len;				\
-			iterate_iovec(i, n, base, len, off,	\
-						iov, (I))	\
-			i->nr_segs -= iov - iter_iov(i);	\
-			i->__iov = iov;				\
-		} else if (iov_iter_is_bvec(i)) {		\
-			const struct bio_vec *bvec = i->bvec;	\
-			void *base;				\
-			size_t len;				\
-			iterate_bvec(i, n, base, len, off,	\
-						bvec, (K))	\
-			i->nr_segs -= bvec - i->bvec;		\
-			i->bvec = bvec;				\
-		} else if (iov_iter_is_kvec(i)) {		\
-			const struct kvec *kvec = i->kvec;	\
-			void *base;				\
-			size_t len;				\
-			iterate_iovec(i, n, base, len, off,	\
-						kvec, (K))	\
-			i->nr_segs -= kvec - i->kvec;		\
-			i->kvec = kvec;				\
-		} else if (iov_iter_is_xarray(i)) {		\
-			void *base;				\
-			size_t len;				\
-			iterate_xarray(i, n, base, len, off,	\
-							(K))	\
-		}						\
-		i->count -= n;					\
-	}							\
-}
-#define iterate_and_advance(i, n, base, len, off, I, K) \
-	__iterate_and_advance(i, n, base, len, off, I, ((void)(K),0))
-
-static int copyout(void __user *to, const void *from, size_t n)
+static __always_inline
+size_t copy_to_user_iter(void __user *iter_to, size_t progress,
+			 size_t len, void *from, void *priv2)
 {
 	if (should_fail_usercopy())
-		return n;
-	if (access_ok(to, n)) {
-		instrument_copy_to_user(to, from, n);
-		n = raw_copy_to_user(to, from, n);
+		return len;
+	if (access_ok(iter_to, len)) {
+		from += progress;
+		instrument_copy_to_user(iter_to, from, len);
+		len = raw_copy_to_user(iter_to, from, len);
 	}
-	return n;
+	return len;
 }
 
-static int copyout_nofault(void __user *to, const void *from, size_t n)
+static __always_inline
+size_t copy_to_user_iter_nofault(void __user *iter_to, size_t progress,
+				 size_t len, void *from, void *priv2)
 {
-	long res;
+	ssize_t res;
 
 	if (should_fail_usercopy())
-		return n;
-
-	res = copy_to_user_nofault(to, from, n);
+		return len;
 
-	return res < 0 ? n : res;
+	from += progress;
+	res = copy_to_user_nofault(iter_to, from, len);
+	return res < 0 ? len : res;
 }
 
-static int copyin(void *to, const void __user *from, size_t n)
+static __always_inline
+size_t copy_from_user_iter(void __user *iter_from, size_t progress,
+			   size_t len, void *to, void *priv2)
 {
-	size_t res = n;
+	size_t res = len;
 
 	if (should_fail_usercopy())
-		return n;
-	if (access_ok(from, n)) {
-		instrument_copy_from_user_before(to, from, n);
-		res = raw_copy_from_user(to, from, n);
-		instrument_copy_from_user_after(to, from, n, res);
+		return len;
+	if (access_ok(iter_from, len)) {
+		to += progress;
+		instrument_copy_from_user_before(to, iter_from, len);
+		res = raw_copy_from_user(to, iter_from, len);
+		instrument_copy_from_user_after(to, iter_from, len, res);
 	}
 	return res;
 }
 
+static __always_inline
+size_t memcpy_to_iter(void *iter_to, size_t progress,
+		      size_t len, void *from, void *priv2)
+{
+	memcpy(iter_to, from + progress, len);
+	return 0;
+}
+
+static __always_inline
+size_t memcpy_from_iter(void *iter_from, size_t progress,
+			size_t len, void *to, void *priv2)
+{
+	memcpy(to + progress, iter_from, len);
+	return 0;
+}
+
 /*
  * fault_in_iov_iter_readable - fault in iov iterator for reading
  * @i: iterator
@@ -312,23 +192,29 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 		return 0;
 	if (user_backed_iter(i))
 		might_fault();
-	iterate_and_advance(i, bytes, base, len, off,
-		copyout(base, addr + off, len),
-		memcpy(base, addr + off, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, (void *)addr,
+				   copy_to_user_iter, memcpy_to_iter);
 }
 EXPORT_SYMBOL(_copy_to_iter);
 
 #ifdef CONFIG_ARCH_HAS_COPY_MC
-static int copyout_mc(void __user *to, const void *from, size_t n)
-{
-	if (access_ok(to, n)) {
-		instrument_copy_to_user(to, from, n);
-		n = copy_mc_to_user((__force void *) to, from, n);
+static __always_inline
+size_t copy_to_user_iter_mc(void __user *iter_to, size_t progress,
+			    size_t len, void *from, void *priv2)
+{
+	if (access_ok(iter_to, len)) {
+		from += progress;
+		instrument_copy_to_user(iter_to, from, len);
+		len = copy_mc_to_user(iter_to, from, len);
 	}
-	return n;
+	return len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_mc(void *iter_to, size_t progress,
+			 size_t len, void *from, void *priv2)
+{
+	return copy_mc_to_kernel(iter_to, from + progress, len);
 }
 
 /**
@@ -361,22 +247,20 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 		return 0;
 	if (user_backed_iter(i))
 		might_fault();
-	__iterate_and_advance(i, bytes, base, len, off,
-		copyout_mc(base, addr + off, len),
-		copy_mc_to_kernel(base, addr + off, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, (void *)addr,
+				   copy_to_user_iter_mc, memcpy_to_iter_mc);
 }
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
-static void *memcpy_from_iter(struct iov_iter *i, void *to, const void *from,
-			      size_t size)
+static size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
+				  size_t len, void *to, void *priv2)
 {
-	if (iov_iter_is_copy_mc(i))
-		return (void *)copy_mc_to_kernel(to, from, size);
-	return memcpy(to, from, size);
+	struct iov_iter *iter = priv2;
+
+	if (iov_iter_is_copy_mc(iter))
+		return copy_mc_to_kernel(to + progress, iter_from, len);
+	return memcpy_from_iter(iter_from, progress, len, to, priv2);
 }
 
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
@@ -386,30 +270,46 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 
 	if (user_backed_iter(i))
 		might_fault();
-	iterate_and_advance(i, bytes, base, len, off,
-		copyin(addr + off, base, len),
-		memcpy_from_iter(i, addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance2(i, bytes, addr, i,
+				    copy_from_user_iter,
+				    memcpy_from_iter_mc);
 }
 EXPORT_SYMBOL(_copy_from_iter);
 
+static __always_inline
+size_t copy_from_user_iter_nocache(void __user *iter_from, size_t progress,
+				   size_t len, void *to, void *priv2)
+{
+	return __copy_from_user_inatomic_nocache(to + progress, iter_from, len);
+}
+
 size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
 
-	iterate_and_advance(i, bytes, base, len, off,
-		__copy_from_user_inatomic_nocache(addr + off, base, len),
-		memcpy(addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter_nocache,
+				   memcpy_from_iter);
 }
 EXPORT_SYMBOL(_copy_from_iter_nocache);
 
 #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+static __always_inline
+size_t copy_from_user_iter_flushcache(void __user *iter_from, size_t progress,
+				      size_t len, void *to, void *priv2)
+{
+	return __copy_from_user_flushcache(to + progress, iter_from, len);
+}
+
+static __always_inline
+size_t memcpy_from_iter_flushcache(void *iter_from, size_t progress,
+				   size_t len, void *to, void *priv2)
+{
+	memcpy_flushcache(to + progress, iter_from, len);
+	return 0;
+}
+
 /**
  * _copy_from_iter_flushcache - write destination through cpu cache
  * @addr: destination kernel address
@@ -431,12 +331,9 @@ size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
 
-	iterate_and_advance(i, bytes, base, len, off,
-		__copy_from_user_flushcache(addr + off, base, len),
-		memcpy_flushcache(addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter_flushcache,
+				   memcpy_from_iter_flushcache);
 }
 EXPORT_SYMBOL_GPL(_copy_from_iter_flushcache);
 #endif
@@ -508,10 +405,9 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t byte
 		void *kaddr = kmap_local_page(page);
 		size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
 
-		iterate_and_advance(i, n, base, len, off,
-			copyout_nofault(base, kaddr + offset + off, len),
-			memcpy(base, kaddr + offset + off, len)
-		)
+		n = iterate_and_advance(i, n, kaddr + offset,
+					copy_to_user_iter_nofault,
+					memcpy_to_iter);
 		kunmap_local(kaddr);
 		res += n;
 		bytes -= n;
@@ -554,14 +450,25 @@ size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 }
 EXPORT_SYMBOL(copy_page_from_iter);
 
-size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
+static __always_inline
+size_t zero_to_user_iter(void __user *iter_to, size_t progress,
+			 size_t len, void *priv, void *priv2)
 {
-	iterate_and_advance(i, bytes, base, len, count,
-		clear_user(base, len),
-		memset(base, 0, len)
-	)
+	return clear_user(iter_to, len);
+}
 
-	return bytes;
+static __always_inline
+size_t zero_to_iter(void *iter_to, size_t progress,
+		    size_t len, void *priv, void *priv2)
+{
+	memset(iter_to, 0, len);
+	return 0;
+}
+
+size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
+{
+	return iterate_and_advance(i, bytes, NULL,
+				   zero_to_user_iter, zero_to_iter);
 }
 EXPORT_SYMBOL(iov_iter_zero);
 
@@ -586,10 +493,9 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 	}
 
 	p = kmap_atomic(page) + offset;
-	iterate_and_advance(i, n, base, len, off,
-		copyin(p + off, base, len),
-		memcpy_from_iter(i, p + off, base, len)
-	)
+	n = iterate_and_advance2(i, n, p, i,
+				 copy_from_user_iter,
+				 memcpy_from_iter_mc);
 	kunmap_atomic(p);
 	copied += n;
 	offset += n;
@@ -1180,32 +1086,64 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
+static __always_inline
+size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
+				size_t len, void *to, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_from_user(iter_from, to + progress, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
+			     size_t len, void *to, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+	return 0;
+}
+
 size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
 			       struct iov_iter *i)
 {
-	__wsum sum, next;
-	sum = *csum;
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
-
-	iterate_and_advance(i, bytes, base, len, off, ({
-		next = csum_and_copy_from_user(base, addr + off, len);
-		sum = csum_block_add(sum, next, off);
-		next ? 0 : len;
-	}), ({
-		sum = csum_and_memcpy(addr + off, base, len, sum, off);
-	})
-	)
-	*csum = sum;
-	return bytes;
+	return iterate_and_advance2(i, bytes, addr, csum,
+				    copy_from_user_iter_csum,
+				    memcpy_from_iter_csum);
 }
 EXPORT_SYMBOL(csum_and_copy_from_iter);
 
+static __always_inline
+size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
+			      size_t len, void *from, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_to_user(from + progress, iter_to, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
+			   size_t len, void *from, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+	return 0;
+}
+
 size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
 			     struct iov_iter *i)
 {
 	struct csum_state *csstate = _csstate;
-	__wsum sum, next;
+	__wsum sum;
 
 	if (WARN_ON_ONCE(i->data_source))
 		return 0;
@@ -1219,14 +1157,10 @@ size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
 	}
 
 	sum = csum_shift(csstate->csum, csstate->off);
-	iterate_and_advance(i, bytes, base, len, off, ({
-		next = csum_and_copy_to_user(addr + off, base, len);
-		sum = csum_block_add(sum, next, off);
-		next ? 0 : len;
-	}), ({
-		sum = csum_and_memcpy(base, addr + off, len, sum, off);
-	})
-	)
+
+	bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
+				     copy_to_user_iter_csum,
+				     memcpy_to_iter_csum);
 	csstate->csum = csum_shift(sum, csstate->off);
 	csstate->off += bytes;
 	return bytes;
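To illustrate how the new header is meant to be consumed outside
lib/iov_iter.c, here is a minimal sketch of the all-zeros scan suggested
for rbd in the description above. Every name below is hypothetical; none
of this code is part of the patch:

#include <linux/iov_iter.h>
#include <linux/string.h>

/* Kernel-address step: check one mapped segment for non-zero bytes. */
static size_t scan_zero_step(void *iter_base, size_t progress, size_t len,
			     void *priv, void *priv2)
{
	bool *all_zero = priv;

	if (memchr_inv(iter_base, 0, len))
		*all_zero = false;
	return 0;	/* 0 means the whole segment was processed */
}

/* User-address step: a real version would bounce-buffer; give up here. */
static size_t scan_zero_ustep(void __user *iter_base, size_t progress,
			      size_t len, void *priv, void *priv2)
{
	bool *all_zero = priv;

	*all_zero = false;
	return len;	/* a non-zero remainder stops the iteration */
}

static bool iter_is_all_zero(struct iov_iter *iter, size_t len)
{
	bool all_zero = true;

	/* Note: this consumes the iterator as a side effect; a caller
	 * that only wants to peek must save and restore its state.
	 */
	iterate_and_advance(iter, len, &all_zero,
			    scan_zero_ustep, scan_zero_step);
	return all_zero;
}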
From patchwork Mon Sep 25 12:03:05 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397710
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 08/12] iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()
Date: Mon, 25 Sep 2023 13:03:05 +0100
Message-ID: <20230925120309.1731676-9-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>

iter->copy_mc is only used with a bvec iterator and only by
dump_emit_page() in fs/coredump.c, so rather than handle this in
memcpy_from_iter_mc() where it is checked repeatedly by _copy_from_iter()
and copy_page_from_iter_atomic(), check it once in a wrapper,
__copy_from_iter(), and divert to a bvec-only path,
__copy_from_iter_mc(), when the flag is set.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 lib/iov_iter.c | 39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 65374ee91ecd..943aa3cfd7b3 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -253,14 +253,33 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
-static size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
-				  size_t len, void *to, void *priv2)
+static __always_inline
+size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
+			   size_t len, void *to, void *priv2)
+{
+	return copy_mc_to_kernel(to + progress, iter_from, len);
+}
+
+static size_t __copy_from_iter_mc(void *addr, size_t bytes, struct iov_iter *i)
 {
-	struct iov_iter *iter = priv2;
+	size_t progress;
 
-	if (iov_iter_is_copy_mc(iter))
-		return copy_mc_to_kernel(to + progress, iter_from, len);
-	return memcpy_from_iter(iter_from, progress, len, to, priv2);
+	if (unlikely(i->count < bytes))
+		bytes = i->count;
+	if (unlikely(!bytes))
+		return 0;
+	progress = iterate_bvec(i, bytes, addr, NULL, memcpy_from_iter_mc);
+	i->count -= progress;
+	return progress;
+}
+
+static __always_inline
+size_t __copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
+{
+	if (unlikely(iov_iter_is_copy_mc(i)))
+		return __copy_from_iter_mc(addr, bytes, i);
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter, memcpy_from_iter);
 }
 
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
@@ -270,9 +289,7 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 
 	if (user_backed_iter(i))
 		might_fault();
-	return iterate_and_advance2(i, bytes, addr, i,
-				    copy_from_user_iter,
-				    memcpy_from_iter_mc);
+	return __copy_from_iter(addr, bytes, i);
 }
 EXPORT_SYMBOL(_copy_from_iter);
 
@@ -493,9 +510,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 	}
 
 	p = kmap_atomic(page) + offset;
-	n = iterate_and_advance2(i, n, p, i,
-				 copy_from_user_iter,
-				 memcpy_from_iter_mc);
+	n = __copy_from_iter(p, n, i);
 	kunmap_atomic(p);
 	copied += n;
 	offset += n;
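For context, the lone opt-in site named above sets the flag on a bvec
iterator roughly as follows. This is an abridged sketch modelled on
dump_emit_page() in fs/coredump.c, not a quotation of it:

#include <linux/bvec.h>
#include <linux/uio.h>

static void setup_mc_tolerant_source(struct iov_iter *iter,
				     struct bio_vec *bvec, struct page *page)
{
	bvec_set_page(bvec, page, PAGE_SIZE, 0);
	iov_iter_bvec(iter, ITER_SOURCE, bvec, 1, PAGE_SIZE);
	/* Tolerate machine-check faults while reading the source page. */
	iter->copy_mc = true;
}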
From patchwork Mon Sep 25 12:03:06 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397713
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 09/12] iov_iter, net: Move csum_and_copy_to/from_iter() to
 net/
Date: Mon, 25 Sep 2023 13:03:06 +0100
Message-ID: <20230925120309.1731676-10-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Move csum_and_copy_to/from_iter() to the networking code now that the
iteration framework can be #included from there.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h | 25 ++++++++++++
 include/linux/uio.h    | 18 ---------
 lib/iov_iter.c         | 89 ------------------------------------------
 net/core/datagram.c    | 50 +++++++++++++++++++++++-
 net/core/skbuff.c      | 33 ++++++++++++++++
 5 files changed, 107 insertions(+), 108 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4174c4b82d13..d0656cc11c16 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3679,6 +3679,31 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l
         return __skb_put_padto(skb, len, true);
 }
 
+static inline __wsum csum_and_memcpy(void *to, const void *from, size_t len,
+                                     __wsum sum, size_t off)
+{
+        __wsum next = csum_partial_copy_nocheck(from, to, len);
+        return csum_block_add(sum, next, off);
+}
+
+struct csum_state {
+        __wsum csum;
+        size_t off;
+};
+
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
+
+static __always_inline __must_check
+bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
+                                  __wsum *csum, struct iov_iter *i)
+{
+        size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
+        if (likely(copied == bytes))
+                return true;
+        iov_iter_revert(i, copied);
+        return false;
+}
+
 static inline int skb_add_data(struct sk_buff *skb,
                                struct iov_iter *from, int copy)
 {

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 65d9143f83c8..0a5426c97e02 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -338,24 +338,6 @@ iov_iter_npages_cap(struct iov_iter *i, int maxpages, size_t max_bytes)
         return npages;
 }
 
-struct csum_state {
-        __wsum csum;
-        size_t off;
-};
-
-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csstate, struct iov_iter *i);
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
-
-static __always_inline __must_check
-bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
-                                  __wsum *csum, struct iov_iter *i)
-{
-        size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
-        if (likely(copied == bytes))
-                return true;
-        iov_iter_revert(i, copied);
-        return false;
-}
 
 size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
                              struct iov_iter *i);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 943aa3cfd7b3..fef934a8745d 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -10,7 +10,6 @@
 #include
 #include
 #include
-#include <net/checksum.h>
 #include
 #include
 #include
@@ -179,13 +178,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_init);
 
-static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
-                              __wsum sum, size_t off)
-{
-        __wsum next = csum_partial_copy_nocheck(from, to, len);
-        return csum_block_add(sum, next, off);
-}
-
 size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 {
         if (WARN_ON_ONCE(i->data_source))
@@ -1101,87 +1093,6 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
-static __always_inline
-size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
-                                size_t len, void *to, void *priv2)
-{
-        __wsum next, *csum = priv2;
-
-        next = csum_and_copy_from_user(iter_from, to + progress, len);
-        *csum = csum_block_add(*csum, next, progress);
-        return next ? 0 : len;
-}
-
-static __always_inline
-size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
-                             size_t len, void *to, void *priv2)
-{
-        __wsum *csum = priv2;
-
-        *csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
-        return 0;
-}
-
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
-                               struct iov_iter *i)
-{
-        if (WARN_ON_ONCE(!i->data_source))
-                return 0;
-        return iterate_and_advance2(i, bytes, addr, csum,
-                                    copy_from_user_iter_csum,
-                                    memcpy_from_iter_csum);
-}
-EXPORT_SYMBOL(csum_and_copy_from_iter);
-
-static __always_inline
-size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
-                              size_t len, void *from, void *priv2)
-{
-        __wsum next, *csum = priv2;
-
-        next = csum_and_copy_to_user(from + progress, iter_to, len);
-        *csum = csum_block_add(*csum, next, progress);
-        return next ? 0 : len;
-}
-
-static __always_inline
-size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
-                           size_t len, void *from, void *priv2)
-{
-        __wsum *csum = priv2;
-
-        *csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
-        return 0;
-}
-
-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
-                             struct iov_iter *i)
-{
-        struct csum_state *csstate = _csstate;
-        __wsum sum;
-
-        if (WARN_ON_ONCE(i->data_source))
-                return 0;
-        if (unlikely(iov_iter_is_discard(i))) {
-                // can't use csum_memcpy() for that one - data is not copied
-                csstate->csum = csum_block_add(csstate->csum,
-                                               csum_partial(addr, bytes, 0),
-                                               csstate->off);
-                csstate->off += bytes;
-                return bytes;
-        }
-
-        sum = csum_shift(csstate->csum, csstate->off);
-
-        bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
-                                     copy_to_user_iter_csum,
-                                     memcpy_to_iter_csum);
-        csstate->csum = csum_shift(sum, csstate->off);
-        csstate->off += bytes;
-        return bytes;
-}
-EXPORT_SYMBOL(csum_and_copy_to_iter);
-
 size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
                              struct iov_iter *i)
 {
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 176eb5834746..37c89d0933b7 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -50,7 +50,7 @@
 #include
 #include
 #include
-#include <linux/uio.h>
+#include <linux/iov_iter.h>
 #include
 #include
 
@@ -716,6 +716,54 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
 }
 EXPORT_SYMBOL(zerocopy_sg_from_iter);
 
+static __always_inline
+size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
+                              size_t len, void *from, void *priv2)
+{
+        __wsum next, *csum = priv2;
+
+        next = csum_and_copy_to_user(from + progress, iter_to, len);
+        *csum = csum_block_add(*csum, next, progress);
+        return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
+                           size_t len, void *from, void *priv2)
+{
+        __wsum *csum = priv2;
+
+        *csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+        return 0;
+}
+
+static size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
+                                    struct iov_iter *i)
+{
+        struct csum_state *csstate = _csstate;
+        __wsum sum;
+
+        if (WARN_ON_ONCE(i->data_source))
+                return 0;
+        if (unlikely(iov_iter_is_discard(i))) {
+                // can't use csum_memcpy() for that one - data is not copied
+                csstate->csum = csum_block_add(csstate->csum,
+                                               csum_partial(addr, bytes, 0),
+                                               csstate->off);
+                csstate->off += bytes;
+                return bytes;
+        }
+
+        sum = csum_shift(csstate->csum, csstate->off);
+
+        bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
+                                     copy_to_user_iter_csum,
+                                     memcpy_to_iter_csum);
+        csstate->csum = csum_shift(sum, csstate->off);
+        csstate->off += bytes;
+        return bytes;
+}
+
 /**
  *        skb_copy_and_csum_datagram - Copy datagram to an iovec iterator
  *                and update a checksum.
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4eaf7ed0d1f4..5dbdfce2d05f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include <linux/iov_iter.h>
 #include
 #include
 
@@ -6931,3 +6932,35 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
         return spliced ?: ret;
 }
 EXPORT_SYMBOL(skb_splice_from_iter);
+
+static __always_inline
+size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
+                             size_t len, void *to, void *priv2)
+{
+        __wsum *csum = priv2;
+
+        *csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+        return 0;
+}
+
+static __always_inline
+size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
+                                size_t len, void *to, void *priv2)
+{
+        __wsum next, *csum = priv2;
+
+        next = csum_and_copy_from_user(iter_from, to + progress, len);
+        *csum = csum_block_add(*csum, next, progress);
+        return next ? 0 : len;
+}
+
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
+                               struct iov_iter *i)
+{
+        if (WARN_ON_ONCE(!i->data_source))
+                return 0;
+        return iterate_and_advance2(i, bytes, addr, csum,
+                                    copy_from_user_iter_csum,
+                                    memcpy_from_iter_csum);
+}
+EXPORT_SYMBOL(csum_and_copy_from_iter);
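With csum_and_copy_to_iter() now static in net/core/datagram.c, its only caller sits directly above it in the same file. A sketch of how that caller drives it, close to but not verbatim the upstream skb_copy_and_csum_datagram():

static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
                                      struct iov_iter *to, int len,
                                      __wsum *csump)
{
        struct csum_state csdata = { .csum = *csump };
        int ret;

        /* __skb_datagram_iter() walks the linear data and the fragments,
         * calling csum_and_copy_to_iter() on each chunk with &csdata as
         * the running checksum state.
         */
        ret = __skb_datagram_iter(skb, offset, to, len, true,
                                  csum_and_copy_to_iter, &csdata);
        if (ret)
                return ret;

        *csump = csdata.csum;
        return 0;
}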
From patchwork Mon Sep 25 12:03:07 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397714
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 10/12] iov_iter, net: Fold in csum_and_memcpy()
Date: Mon, 25 Sep 2023 13:03:07 +0100
Message-ID: <20230925120309.1731676-11-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Fold csum_and_memcpy() into its callers.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h | 7 -------
 net/core/datagram.c    | 3 ++-
 net/core/skbuff.c      | 3 ++-
 3 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d0656cc11c16..c81ef5d76953 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3679,13 +3679,6 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l
         return __skb_put_padto(skb, len, true);
 }
 
-static inline __wsum csum_and_memcpy(void *to, const void *from, size_t len,
-                                     __wsum sum, size_t off)
-{
-        __wsum next = csum_partial_copy_nocheck(from, to, len);
-        return csum_block_add(sum, next, off);
-}
-
 struct csum_state {
         __wsum csum;
         size_t off;

diff --git a/net/core/datagram.c b/net/core/datagram.c
index 37c89d0933b7..452620dd41e4 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -732,8 +732,9 @@ size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
                            size_t len, void *from, void *priv2)
 {
         __wsum *csum = priv2;
+        __wsum next = csum_partial_copy_nocheck(from + progress, iter_to, len);
 
-        *csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+        *csum = csum_block_add(*csum, next, progress);
         return 0;
 }

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 5dbdfce2d05f..3efed86321db 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6938,8 +6938,9 @@ size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
                              size_t len, void *to, void *priv2)
 {
         __wsum *csum = priv2;
+        __wsum next = csum_partial_copy_nocheck(iter_from, to + progress, len);
 
-        *csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+        *csum = csum_block_add(*csum, next, progress);
         return 0;
 }
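The progress argument passed to csum_block_add() matters because the 16-bit ones'-complement checksum is byte-position sensitive: a chunk that starts at an odd offset must be byte-rotated before it is folded in, and csum_block_add() does exactly that based on its offset argument. A small illustration (a hypothetical helper, not from the patch):

/* Checksums buf[0..total) in two pieces. The result equals
 * csum_partial(p, total, 0) even when 'first' is odd, because
 * csum_block_add() rotates the second sum to fix the byte parity.
 */
static __wsum csum_in_two_chunks(const void *buf, int first, int total)
{
        const u8 *p = buf;
        __wsum sum = csum_partial(p, first, 0);

        return csum_block_add(sum,
                              csum_partial(p + first, total - first, 0),
                              first);
}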
From patchwork Mon Sep 25 12:03:08 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397715
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 11/12] iov_iter, net: Merge csum_and_copy_from_iter{,_full}()
 together
Date: Mon, 25 Sep 2023 13:03:08 +0100
Message-ID: <20230925120309.1731676-12-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Move csum_and_copy_from_iter_full() out of line and then merge
csum_and_copy_from_iter() into its only caller.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h | 19 ++-----------------
 net/core/datagram.c    |  5 +++++
 net/core/skbuff.c      | 20 +++++++++++++-------
 3 files changed, 20 insertions(+), 24 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index c81ef5d76953..be402f55f6d6 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3679,23 +3679,8 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l
         return __skb_put_padto(skb, len, true);
 }
 
-struct csum_state {
-        __wsum csum;
-        size_t off;
-};
-
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
-
-static __always_inline __must_check
-bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
-                                  __wsum *csum, struct iov_iter *i)
-{
-        size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
-        if (likely(copied == bytes))
-                return true;
-        iov_iter_revert(i, copied);
-        return false;
-}
+bool csum_and_copy_from_iter_full(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i)
+        __must_check;
 
 static inline int skb_add_data(struct sk_buff *skb,
                                struct iov_iter *from, int copy)

diff --git a/net/core/datagram.c b/net/core/datagram.c
index 452620dd41e4..722311eeee18 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -738,6 +738,11 @@ size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
         return 0;
 }
 
+struct csum_state {
+        __wsum csum;
+        size_t off;
+};
+
 static size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
                                     struct iov_iter *i)
 {
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3efed86321db..2bfa6a7ba244 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6955,13 +6955,19 @@ size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
         return next ? 0 : len;
 }
 
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
-                               struct iov_iter *i)
+bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
+                                  __wsum *csum, struct iov_iter *i)
 {
+        size_t copied;
+
         if (WARN_ON_ONCE(!i->data_source))
-                return 0;
-        return iterate_and_advance2(i, bytes, addr, csum,
-                                    copy_from_user_iter_csum,
-                                    memcpy_from_iter_csum);
+                return false;
+        copied = iterate_and_advance2(i, bytes, addr, csum,
+                                      copy_from_user_iter_csum,
+                                      memcpy_from_iter_csum);
+        if (likely(copied == bytes))
+                return true;
+        iov_iter_revert(i, copied);
+        return false;
 }
-EXPORT_SYMBOL(csum_and_copy_from_iter);
+EXPORT_SYMBOL(csum_and_copy_from_iter_full);
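csum_and_copy_from_iter_full() is consumed by helpers such as ip_generic_getfrag(), which copy user data into an skb and fold the bytes into the packet checksum in one pass; on failure the iterator has already been reverted, so the caller can simply error out. Roughly (a simplified sketch, close to but not verbatim the upstream function):

static int getfrag_sketch(void *from, char *to, int offset, int len,
                          int odd, struct sk_buff *skb)
{
        struct msghdr *msg = from;

        if (skb->ip_summed == CHECKSUM_PARTIAL) {
                /* Hardware will checksum the packet; a plain copy suffices. */
                if (!copy_from_iter_full(to, len, &msg->msg_iter))
                        return -EFAULT;
        } else {
                __wsum csum = 0;

                if (!csum_and_copy_from_iter_full(to, len, &csum, &msg->msg_iter))
                        return -EFAULT;
                skb->csum = csum_block_add(skb->csum, csum, odd);
        }
        return 0;
}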
From patchwork Mon Sep 25 12:03:09 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13397716
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 12/12] iov_iter, net: Move hash_and_copy_to_iter() to net/
Date: Mon, 25 Sep 2023 13:03:09 +0100
Message-ID: <20230925120309.1731676-13-dhowells@redhat.com>
In-Reply-To: <20230925120309.1731676-1-dhowells@redhat.com>
References: <20230925120309.1731676-1-dhowells@redhat.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Move hash_and_copy_to_iter() to be with its only caller in the networking
code.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 include/linux/uio.h |  3 ---
 lib/iov_iter.c      | 20 --------------------
 net/core/datagram.c | 19 +++++++++++++++++++
 3 files changed, 19 insertions(+), 23 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 0a5426c97e02..b6214cbf2a43 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -338,9 +338,6 @@ iov_iter_npages_cap(struct iov_iter *i, int maxpages, size_t max_bytes)
         return npages;
 }
 
-size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
-                             struct iov_iter *i);
-
 struct iovec *iovec_from_user(const struct iovec __user *uvector,
                 unsigned long nr_segs, unsigned long fast_segs,
                 struct iovec *fast_iov, bool compat);

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fef934a8745d..2547c96d56c7 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1,5 +1,4 @@
 // SPDX-License-Identifier: GPL-2.0-only
-#include <crypto/hash.h>
 #include
 #include
 #include
@@ -1093,25 +1092,6 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
-size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
-                             struct iov_iter *i)
-{
-#ifdef CONFIG_CRYPTO_HASH
-        struct ahash_request *hash = hashp;
-        struct scatterlist sg;
-        size_t copied;
-
-        copied = copy_to_iter(addr, bytes, i);
-        sg_init_one(&sg, addr, copied);
-        ahash_request_set_crypt(hash, &sg, NULL, copied);
-        crypto_ahash_update(hash);
-        return copied;
-#else
-        return 0;
-#endif
-}
-EXPORT_SYMBOL(hash_and_copy_to_iter);
-
 static int iov_npages(const struct iov_iter *i, int maxpages)
 {
         size_t skip = i->iov_offset, size = i->count;

diff --git a/net/core/datagram.c b/net/core/datagram.c
index 722311eeee18..103d46fa0eeb 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -61,6 +61,7 @@
 #include
 #include
 #include
+#include <crypto/hash.h>
 
 /*
  *        Is a socket 'connection oriented' ?
@@ -489,6 +490,24 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
         return 0;
 }
 
+static size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
+                                    struct iov_iter *i)
+{
+#ifdef CONFIG_CRYPTO_HASH
+        struct ahash_request *hash = hashp;
+        struct scatterlist sg;
+        size_t copied;
+
+        copied = copy_to_iter(addr, bytes, i);
+        sg_init_one(&sg, addr, copied);
+        ahash_request_set_crypt(hash, &sg, NULL, copied);
+        crypto_ahash_update(hash);
+        return copied;
+#else
+        return 0;
+#endif
+}
+
 /**
  *        skb_copy_and_hash_datagram_iter - Copy datagram to an iovec iterator
  *                and update a hash.