From patchwork Fri Sep 22 12:02:15 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395713
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 01/13] iov_iter: Remove last_offset from iov_iter as it was for ITER_PIPE
Date: Fri, 22 Sep 2023 13:02:15 +0100
Message-ID: <20230922120227.1173720-2-dhowells@redhat.com>

Now that ITER_PIPE has been removed, iov_iter::last_offset is no longer
used, so remove it.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 42bce38a8e87..2000e42a6586 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -44,10 +44,7 @@ struct iov_iter {
 	bool nofault;
 	bool data_source;
 	bool user_backed;
-	union {
-		size_t iov_offset;
-		int last_offset;
-	};
+	size_t iov_offset;
 	/*
 	 * Hack alert: overlay ubuf_iovec with iovec + count, so
 	 * that the members resolve correctly regardless of the type

From patchwork Fri Sep 22 12:02:16 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395716
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dan Williams
Subject: [PATCH v6 02/13] iov_iter: Be consistent about the __user tag on copy_mc_to_user()
Date: Fri, 22 Sep 2023 13:02:16 +0100
Message-ID: <20230922120227.1173720-3-dhowells@redhat.com>

copy_mc_to_user() has the destination marked __user on powerpc, but not on
x86; the latter results in a sparse warning in lib/iov_iter.c.

Fix this by applying the tag on x86 too.

Fixes: ec6347bb4339 ("x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Dan Williams
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 arch/x86/include/asm/uaccess.h | 2 +-
 arch/x86/lib/copy_mc.c         | 8 ++++----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 8bae40a66282..5c367c1290c3 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -496,7 +496,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 #define copy_mc_to_kernel copy_mc_to_kernel
 
 unsigned long __must_check
-copy_mc_to_user(void *to, const void *from, unsigned len);
+copy_mc_to_user(void __user *to, const void *from, unsigned len);
 #endif
 
 /*
diff --git a/arch/x86/lib/copy_mc.c b/arch/x86/lib/copy_mc.c
index 80efd45a7761..6e8b7e600def 100644
--- a/arch/x86/lib/copy_mc.c
+++ b/arch/x86/lib/copy_mc.c
@@ -70,23 +70,23 @@ unsigned long __must_check copy_mc_to_kernel(void *dst, const void *src, unsigned len)
 }
 EXPORT_SYMBOL_GPL(copy_mc_to_kernel);
 
-unsigned long __must_check copy_mc_to_user(void *dst, const void *src, unsigned len)
+unsigned long __must_check copy_mc_to_user(void __user *dst, const void *src, unsigned len)
 {
 	unsigned long ret;
 
 	if (copy_mc_fragile_enabled) {
 		__uaccess_begin();
-		ret = copy_mc_fragile(dst, src, len);
+		ret = copy_mc_fragile((__force void *)dst, src, len);
 		__uaccess_end();
 		return ret;
 	}
 
 	if (static_cpu_has(X86_FEATURE_ERMS)) {
 		__uaccess_begin();
-		ret = copy_mc_enhanced_fast_string(dst, src, len);
+		ret = copy_mc_enhanced_fast_string((__force void *)dst, src, len);
 		__uaccess_end();
 		return ret;
 	}
 
-	return copy_user_generic(dst, src, len);
+	return copy_user_generic((__force void *)dst, src, len);
 }
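Not part of the patch, but a minimal sketch of the caller-visible effect:
with the destination tagged __user on x86 as well as powerpc, a typed
caller such as the hypothetical wrapper below is sparse-clean (e.g. under
"make C=1") with no __force cast, where the untagged x86 prototype
previously produced an address-space mismatch warning.

	/* Hypothetical caller, for illustration only. */
	static unsigned long copy_buf_to_user(void __user *ubuf,
					      const void *kbuf, unsigned len)
	{
		/* Sparse-clean now that the prototype takes "void __user *to". */
		return copy_mc_to_user(ubuf, kbuf, len);
	}

The __force casts then move inside copy_mc_to_user(), next to the
__uaccess_begin()/__uaccess_end() pair that justifies them.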
From patchwork Fri Sep 22 12:02:17 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395714
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Jaroslav Kysela, Takashi Iwai, Oswald Buddenhagen, Suren Baghdasaryan,
    Kuninori Morimoto, alsa-devel@alsa-project.org
Subject: [PATCH v6 03/13] sound: Fix snd_pcm_readv()/writev() to use iov access functions
Date: Fri, 22 Sep 2023 13:02:17 +0100
Message-ID: <20230922120227.1173720-4-dhowells@redhat.com>

Fix snd_pcm_readv()/writev() to use iov access functions rather than
poking at the iov_iter internals directly.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jaroslav Kysela
Reviewed-by: Takashi Iwai
cc: Oswald Buddenhagen
cc: Jens Axboe
cc: Suren Baghdasaryan
cc: Kuninori Morimoto
cc: alsa-devel@alsa-project.org
---
 sound/core/pcm_native.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index bd9ddf412b46..9a69236fa207 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3527,7 +3527,7 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!to->user_backed)
+	if (!user_backed_iter(to))
 		return -EINVAL;
 	if (to->nr_segs > 1024 || to->nr_segs != runtime->channels)
 		return -EINVAL;
@@ -3567,7 +3567,7 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
 	if (runtime->state == SNDRV_PCM_STATE_OPEN ||
 	    runtime->state == SNDRV_PCM_STATE_DISCONNECTED)
 		return -EBADFD;
-	if (!from->user_backed)
+	if (!user_backed_iter(from))
 		return -EINVAL;
 	if (from->nr_segs > 128 || from->nr_segs != runtime->channels ||
 	    !frame_aligned(runtime, iov->iov_len))

From patchwork Fri Sep 22 12:02:18 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395715
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Dennis Dalessandro, Jason Gunthorpe, Leon Romanovsky,
    linux-rdma@vger.kernel.org
Subject: [PATCH v6 04/13] infiniband: Use user_backed_iter() to see if iterator is UBUF/IOVEC
Date: Fri, 22 Sep 2023 13:02:18 +0100
Message-ID: <20230922120227.1173720-5-dhowells@redhat.com>

Use user_backed_iter() to see if the iterator is UBUF/IOVEC rather than
poking inside the iterator.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Dennis Dalessandro
cc: Jason Gunthorpe
cc: Leon Romanovsky
cc: linux-rdma@vger.kernel.org
---
 drivers/infiniband/hw/hfi1/file_ops.c    | 2 +-
 drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index a5ab22cedd41..788fc249234f 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -267,7 +267,7 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 
 	if (!HFI1_CAP_IS_KSET(SDMA))
 		return -EINVAL;
-	if (!from->user_backed)
+	if (!user_backed_iter(from))
 		return -EINVAL;
 	idx = srcu_read_lock(&fd->pq_srcu);
 	pq = srcu_dereference(fd->pq, &fd->pq_srcu);
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 152952127f13..29e4c59aa23b 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -2244,7 +2244,7 @@ static ssize_t qib_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	struct qib_ctxtdata *rcd = ctxt_fp(iocb->ki_filp);
 	struct qib_user_sdma_queue *pq = fp->pq;
 
-	if (!from->user_backed || !from->nr_segs || !pq)
+	if (!user_backed_iter(from) || !from->nr_segs || !pq)
 		return -EINVAL;
 
 	return qib_user_sdma_writev(rcd, pq, iter_iov(from), from->nr_segs);
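This and the preceding sound patch follow the same pattern. As an
illustration only (a hypothetical driver, not code from the series), a
->write_iter() handler written against the accessors rather than the
iov_iter internals looks like this:

	/* Hypothetical handler; foo_sdma_writev() is made up. */
	static ssize_t foo_write_iter(struct kiocb *iocb, struct iov_iter *from)
	{
		/* Only UBUF/IOVEC iterators point at user memory. */
		if (!user_backed_iter(from))
			return -EINVAL;
		if (!from->nr_segs)
			return -EINVAL;
		/* iter_iov() is the sanctioned way to reach the iovec array. */
		return foo_sdma_writev(iter_iov(from), from->nr_segs);
	}

Going through user_backed_iter()/iter_iov() is what lets the next two
patches change how user-backedness is represented without touching the
callers again.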
From patchwork Fri Sep 22 12:02:19 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395717
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 05/13] iov_iter: Renumber ITER_* constants
Date: Fri, 22 Sep 2023 13:02:19 +0100
Message-ID: <20230922120227.1173720-6-dhowells@redhat.com>

Renumber the ITER_* iterator-type constants to put things in the same
order as in the iteration functions and to group user-backed iterators
at the bottom.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 2000e42a6586..bef8e56aa45c 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -21,12 +21,12 @@ struct kvec {
 
 enum iter_type {
 	/* iter types */
+	ITER_UBUF,
 	ITER_IOVEC,
-	ITER_KVEC,
 	ITER_BVEC,
+	ITER_KVEC,
 	ITER_XARRAY,
 	ITER_DISCARD,
-	ITER_UBUF,
 };
 
 #define ITER_SOURCE	1	// == WRITE

From patchwork Fri Sep 22 12:02:20 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395748
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 06/13] iov_iter: Derive user-backedness from the iterator type
Date: Fri, 22 Sep 2023 13:02:20 +0100
Message-ID: <20230922120227.1173720-7-dhowells@redhat.com>
Use the iterator type to determine whether an iterator is user-backed or
not rather than using a special flag for it.  Now that ITER_UBUF and
ITER_IOVEC are 0 and 1, they can be checked with a single comparison.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 include/linux/uio.h | 4 +---
 lib/iov_iter.c      | 1 -
 2 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index bef8e56aa45c..65d9143f83c8 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -43,7 +43,6 @@ struct iov_iter {
 	bool copy_mc;
 	bool nofault;
 	bool data_source;
-	bool user_backed;
 	size_t iov_offset;
 	/*
 	 * Hack alert: overlay ubuf_iovec with iovec + count, so
@@ -140,7 +139,7 @@ static inline unsigned char iov_iter_rw(const struct iov_iter *i)
 
 static inline bool user_backed_iter(const struct iov_iter *i)
 {
-	return i->user_backed;
+	return iter_is_ubuf(i) || iter_is_iovec(i);
 }
 
 /*
@@ -380,7 +379,6 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 	*i = (struct iov_iter) {
 		.iter_type = ITER_UBUF,
 		.copy_mc = false,
-		.user_backed = true,
 		.data_source = direction,
 		.ubuf = buf,
 		.count = count,
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 27234a820eeb..227c9f536b94 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -290,7 +290,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 		.iter_type = ITER_IOVEC,
 		.copy_mc = false,
 		.nofault = false,
-		.user_backed = true,
 		.data_source = direction,
 		.__iov = iov,
 		.nr_segs = nr_segs,
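For illustration only (this is not what the patch adds): because the
previous patch made ITER_UBUF == 0 and ITER_IOVEC == 1, the two
iter_is_*() tests above can be folded by the compiler into a single
range check, roughly:

	/* Illustrative equivalent; the patch keeps the readable "||"
	 * form and relies on the compiler to fold it.
	 */
	static inline bool user_backed_iter_equiv(const struct iov_iter *i)
	{
		BUILD_BUG_ON(ITER_UBUF != 0 || ITER_IOVEC != 1);
		return iov_iter_type(i) <= ITER_IOVEC;
	}

That is what makes it cheap to drop the dedicated user_backed flag.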
From patchwork Fri Sep 22 12:02:21 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395718
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 07/13] iov_iter: Convert iterate*() to inline funcs
Date: Fri, 22 Sep 2023 13:02:21 +0100
Message-ID: <20230922120227.1173720-8-dhowells@redhat.com>

Convert the iov_iter iteration macros to inline functions to make the
code easier to follow.  The functions are marked __always_inline as we
don't want to end up with indirect calls in the code.

This, however, leaves dealing with ->copy_mc in an awkward situation
since the step function (memcpy_from_iter_mc()) needs to test the flag
in the iterator, but isn't passed the iterator.  This will be dealt with
in a follow-up patch.

The variable names in the per-type iterator functions have been
harmonised as much as possible and made clearer as to the variable
purpose.

The iterator functions are also moved to a header file so that other
operations that need to scan over an iterator can be added.  For
instance, the rbd driver could use this to scan a buffer to see if it is
all zeros and libceph could use this to generate a crc.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/3710261.1691764329@warthog.procyon.org.uk/ # v1
Link: https://lore.kernel.org/r/855.1692047347@warthog.procyon.org.uk/ # v2
Link: https://lore.kernel.org/r/20230816120741.534415-1-dhowells@redhat.com/ # v3
---
Notes:
    Changes
    =======
    ver #5)
     - Merge in patch to move iteration framework to a header file.
     - Move "iter->count - progress" into individual iteration subfunctions.

 include/linux/iov_iter.h | 274 ++++++++++++++++++++++++++
 lib/iov_iter.c           | 416 ++++++++++++++++-----------------------
 2 files changed, 449 insertions(+), 241 deletions(-)
 create mode 100644 include/linux/iov_iter.h

diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
new file mode 100644
index 000000000000..270454a6703d
--- /dev/null
+++ b/include/linux/iov_iter.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* I/O iterator iteration building functions.
+ *
+ * Copyright (C) 2023 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ */
+
+#ifndef _LINUX_IOV_ITER_H
+#define _LINUX_IOV_ITER_H
+
+#include <linux/uio.h>
+#include <linux/bvec.h>
+
+typedef size_t (*iov_step_f)(void *iter_base, size_t progress, size_t len,
+			     void *priv, void *priv2);
+typedef size_t (*iov_ustep_f)(void __user *iter_base, size_t progress, size_t len,
+			      void *priv, void *priv2);
+
+/*
+ * Handle ITER_UBUF.
+ */
+static __always_inline
+size_t iterate_ubuf(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_ustep_f step)
+{
+	void __user *base = iter->ubuf;
+	size_t progress = 0, remain;
+
+	remain = step(base + iter->iov_offset, 0, len, priv, priv2);
+	progress = len - remain;
+	iter->iov_offset += progress;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_IOVEC.
+ */
+static __always_inline
+size_t iterate_iovec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		     iov_ustep_f step)
+{
+	const struct iovec *p = iter->__iov;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t part = min(len, p->iov_len - skip);
+
+		if (likely(part)) {
+			remain = step(p->iov_base + skip, progress, part, priv, priv2);
+			consumed = part - remain;
+			progress += consumed;
+			skip += consumed;
+			len -= consumed;
+			if (skip < p->iov_len)
+				break;
+		}
+		p++;
+		skip = 0;
+	} while (len);
+
+	iter->nr_segs -= p - iter->__iov;
+	iter->__iov = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_KVEC.
+ */
+static __always_inline
+size_t iterate_kvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_step_f step)
+{
+	const struct kvec *p = iter->kvec;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t part = min(len, p->iov_len - skip);
+
+		if (likely(part)) {
+			remain = step(p->iov_base + skip, progress, part, priv, priv2);
+			consumed = part - remain;
+			progress += consumed;
+			skip += consumed;
+			len -= consumed;
+			if (skip < p->iov_len)
+				break;
+		}
+		p++;
+		skip = 0;
+	} while (len);
+
+	iter->nr_segs -= p - iter->kvec;
+	iter->kvec = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_BVEC.
+ */
+static __always_inline
+size_t iterate_bvec(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		    iov_step_f step)
+{
+	const struct bio_vec *p = iter->bvec;
+	size_t progress = 0, skip = iter->iov_offset;
+
+	do {
+		size_t remain, consumed;
+		size_t offset = p->bv_offset + skip, part;
+		void *kaddr = kmap_local_page(p->bv_page + offset / PAGE_SIZE);
+
+		part = min3(len,
+			    (size_t)(p->bv_len - skip),
+			    (size_t)(PAGE_SIZE - offset % PAGE_SIZE));
+		remain = step(kaddr + offset % PAGE_SIZE, progress, part, priv, priv2);
+		kunmap_local(kaddr);
+		consumed = part - remain;
+		len -= consumed;
+		progress += consumed;
+		skip += consumed;
+		if (skip >= p->bv_len) {
+			skip = 0;
+			p++;
+		}
+		if (remain)
+			break;
+	} while (len);
+
+	iter->nr_segs -= p - iter->bvec;
+	iter->bvec = p;
+	iter->iov_offset = skip;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_XARRAY.
+ */
+static __always_inline
+size_t iterate_xarray(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		      iov_step_f step)
+{
+	struct folio *folio;
+	size_t progress = 0;
+	loff_t start = iter->xarray_start + iter->iov_offset;
+	pgoff_t index = start / PAGE_SIZE;
+	XA_STATE(xas, iter->xarray, index);
+
+	rcu_read_lock();
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		size_t remain, consumed, offset, part, flen;
+
+		if (xas_retry(&xas, folio))
+			continue;
+		if (WARN_ON(xa_is_value(folio)))
+			break;
+		if (WARN_ON(folio_test_hugetlb(folio)))
+			break;
+
+		offset = offset_in_folio(folio, start + progress);
+		flen = min(folio_size(folio) - offset, len);
+
+		while (flen) {
+			void *base = kmap_local_folio(folio, offset);
+
+			part = min_t(size_t, flen,
+				     PAGE_SIZE - offset_in_page(offset));
+			remain = step(base, progress, part, priv, priv2);
+			kunmap_local(base);
+
+			consumed = part - remain;
+			progress += consumed;
+			len -= consumed;
+
+			if (remain || len == 0)
+				goto out;
+			flen -= consumed;
+			offset += consumed;
+		}
+	}
+
+out:
+	rcu_read_unlock();
+	iter->iov_offset += progress;
+	iter->count -= progress;
+	return progress;
+}
+
+/*
+ * Handle ITER_DISCARD.
+ */
+static __always_inline
+size_t iterate_discard(struct iov_iter *iter, size_t len, void *priv, void *priv2,
+		       iov_step_f step)
+{
+	size_t progress = len;
+
+	iter->count -= progress;
+	return progress;
+}
+
+/**
+ * iterate_and_advance2 - Iterate over an iterator
+ * @iter: The iterator to iterate over.
+ * @len: The amount to iterate over.
+ * @priv: Data for the step functions.
+ * @priv2: More data for the step functions.
+ * @ustep: Function for UBUF/IOVEC iterators; given __user addresses.
+ * @step: Function for other iterators; given kernel addresses.
+ *
+ * Iterate over the next part of an iterator, up to the specified length.  The
+ * buffer is presented in segments, which for kernel iteration are broken up by
+ * physical pages and mapped, with the mapped address being presented.
+ *
+ * Two step functions, @step and @ustep, must be provided, one for handling
+ * mapped kernel addresses and the other is given user addresses which have the
+ * potential to fault since no pinning is performed.
+ *
+ * The step functions are passed the address and length of the segment, @priv,
+ * @priv2 and the amount of data so far iterated over (which can, for example,
+ * be added to @priv to point to the right part of a second buffer).  The step
+ * functions should return the amount of the segment they didn't process (ie. 0
+ * indicates complete processing).
+ *
+ * This function returns the amount of data processed (ie. 0 means nothing was
+ * processed and the value of @len means it was processed to completion).
+ */
+static __always_inline
+size_t iterate_and_advance2(struct iov_iter *iter, size_t len, void *priv,
+			    void *priv2, iov_ustep_f ustep, iov_step_f step)
+{
+	if (unlikely(iter->count < len))
+		len = iter->count;
+	if (unlikely(!len))
+		return 0;
+
+	if (likely(iter_is_ubuf(iter)))
+		return iterate_ubuf(iter, len, priv, priv2, ustep);
+	if (likely(iter_is_iovec(iter)))
+		return iterate_iovec(iter, len, priv, priv2, ustep);
+	if (iov_iter_is_bvec(iter))
+		return iterate_bvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_kvec(iter))
+		return iterate_kvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_xarray(iter))
+		return iterate_xarray(iter, len, priv, priv2, step);
+	return iterate_discard(iter, len, priv, priv2, step);
+}
+
+/**
+ * iterate_and_advance - Iterate over an iterator
+ * @iter: The iterator to iterate over.
+ * @len: The amount to iterate over.
+ * @priv: Data for the step functions.
+ * @ustep: Function for UBUF/IOVEC iterators; given __user addresses.
+ * @step: Function for other iterators; given kernel addresses.
+ *
+ * As iterate_and_advance2(), but priv2 is always NULL.
+ */
+static __always_inline
+size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
+			   iov_ustep_f ustep, iov_step_f step)
+{
+	return iterate_and_advance2(iter, len, priv, NULL, ustep, step);
+}
+
+#endif /* _LINUX_IOV_ITER_H */
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 227c9f536b94..65374ee91ecd 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -13,189 +13,69 @@
 #include <net/checksum.h>
 #include <linux/scatterlist.h>
 #include <linux/instrumented.h>
+#include <linux/iov_iter.h>
 
-/* covers ubuf and kbuf alike */
-#define iterate_buf(i, n, base, len, off, __p, STEP) {		\
-	size_t __maybe_unused off = 0;				\
-	len = n;						\
-	base = __p + i->iov_offset;				\
-	len -= (STEP);						\
-	i->iov_offset += len;					\
-	n = len;						\
-}
-
-/* covers iovec and kvec alike */
-#define iterate_iovec(i, n, base, len, off, __p, STEP) {	\
-	size_t off = 0;						\
-	size_t skip = i->iov_offset;				\
-	do {							\
-		len = min(n, __p->iov_len - skip);		\
-		if (likely(len)) {				\
-			base = __p->iov_base + skip;		\
-			len -= (STEP);				\
-			off += len;				\
-			skip += len;				\
-			n -= len;				\
-			if (skip < __p->iov_len)		\
-				break;				\
-		}						\
-		__p++;						\
-		skip = 0;					\
-	} while (n);						\
-	i->iov_offset = skip;					\
-	n = off;						\
-}
-
-#define iterate_bvec(i, n, base, len, off, p, STEP) {		\
-	size_t off = 0;						\
-	unsigned skip = i->iov_offset;				\
-	while (n) {						\
-		unsigned offset = p->bv_offset + skip;		\
-		unsigned left;					\
-		void *kaddr = kmap_local_page(p->bv_page +	\
-					offset / PAGE_SIZE);	\
-		base = kaddr + offset % PAGE_SIZE;		\
-		len = min(min(n, (size_t)(p->bv_len - skip)),	\
-		     (size_t)(PAGE_SIZE - offset % PAGE_SIZE));	\
-		left = (STEP);					\
-		kunmap_local(kaddr);				\
-		len -= left;					\
-		off += len;					\
-		skip += len;					\
-		if (skip == p->bv_len) {			\
-			skip = 0;				\
-			p++;					\
-		}						\
-		n -= len;					\
-		if (left)					\
-			break;					\
-	}							\
-	i->iov_offset = skip;					\
-	n = off;						\
-}
-
-#define iterate_xarray(i, n, base, len, __off, STEP) {		\
-	__label__ __out;					\
-	size_t __off = 0;					\
-	struct folio *folio;					\
-	loff_t start = i->xarray_start + i->iov_offset;		\
-	pgoff_t index = start / PAGE_SIZE;			\
-	XA_STATE(xas, i->xarray, index);			\
-								\
-	len = PAGE_SIZE - offset_in_page(start);		\
-	rcu_read_lock();					\
-	xas_for_each(&xas, folio, ULONG_MAX) {			\
-		unsigned left;					\
-		size_t offset;					\
-		if (xas_retry(&xas, folio))			\
-			continue;				\
-		if (WARN_ON(xa_is_value(folio)))		\
-			break;					\
-		if (WARN_ON(folio_test_hugetlb(folio)))		\
-			break;					\
-		offset = offset_in_folio(folio, start + __off);	\
-		while (offset < folio_size(folio)) {		\
-			base = kmap_local_folio(folio, offset);	\
-			len = min(n, len);			\
-			left = (STEP);				\
-			kunmap_local(base);			\
-			len -= left;				\
-			__off += len;				\
-			n -= len;				\
-			if (left || n == 0)			\
-				goto __out;			\
-			offset += len;				\
-			len = PAGE_SIZE;			\
-		}						\
-	}							\
-__out:								\
-	rcu_read_unlock();					\
-	i->iov_offset += __off;					\
-	n = __off;						\
-}
-
-#define __iterate_and_advance(i, n, base, len, off, I, K) {	\
-	if (unlikely(i->count < n))				\
-		n = i->count;					\
-	if (likely(n)) {					\
-		if (likely(iter_is_ubuf(i))) {			\
-			void __user *base;			\
-			size_t len;				\
-			iterate_buf(i, n, base, len, off,	\
-						i->ubuf, (I))	\
-		} else if (likely(iter_is_iovec(i))) {		\
-			const struct iovec *iov = iter_iov(i);	\
-			void __user *base;			\
-			size_t len;				\
-			iterate_iovec(i, n, base, len, off,	\
-						iov, (I))	\
-			i->nr_segs -= iov - iter_iov(i);	\
-			i->__iov = iov;				\
-		} else if (iov_iter_is_bvec(i)) {		\
-			const struct bio_vec *bvec = i->bvec;	\
-			void *base;				\
-			size_t len;				\
-			iterate_bvec(i, n, base, len, off,	\
-						bvec, (K))	\
-			i->nr_segs -= bvec - i->bvec;		\
-			i->bvec = bvec;				\
-		} else if (iov_iter_is_kvec(i)) {		\
-			const struct kvec *kvec = i->kvec;	\
-			void *base;				\
-			size_t len;				\
-			iterate_iovec(i, n, base, len, off,	\
-						kvec, (K))	\
-			i->nr_segs -= kvec - i->kvec;		\
-			i->kvec = kvec;				\
-		} else if (iov_iter_is_xarray(i)) {		\
-			void *base;				\
-			size_t len;				\
-			iterate_xarray(i, n, base, len, off,	\
-							(K))	\
-		}						\
-		i->count -= n;					\
-	}							\
-}
-#define iterate_and_advance(i, n, base, len, off, I, K) \
-	__iterate_and_advance(i, n, base, len, off, I, ((void)(K),0))
-
-static int copyout(void __user *to, const void *from, size_t n)
+static __always_inline
+size_t copy_to_user_iter(void __user *iter_to, size_t progress,
+			 size_t len, void *from, void *priv2)
 {
 	if (should_fail_usercopy())
-		return n;
-	if (access_ok(to, n)) {
-		instrument_copy_to_user(to, from, n);
-		n = raw_copy_to_user(to, from, n);
+		return len;
+	if (access_ok(iter_to, len)) {
+		from += progress;
+		instrument_copy_to_user(iter_to, from, len);
+		len = raw_copy_to_user(iter_to, from, len);
 	}
-	return n;
+	return len;
 }
 
-static int copyout_nofault(void __user *to, const void *from, size_t n)
+static __always_inline
+size_t copy_to_user_iter_nofault(void __user *iter_to, size_t progress,
+				 size_t len, void *from, void *priv2)
 {
-	long res;
+	ssize_t res;
 
 	if (should_fail_usercopy())
-		return n;
-
-	res = copy_to_user_nofault(to, from, n);
+		return len;
 
-	return res < 0 ? n : res;
+	from += progress;
+	res = copy_to_user_nofault(iter_to, from, len);
+	return res < 0 ? len : res;
 }
 
-static int copyin(void *to, const void __user *from, size_t n)
+static __always_inline
+size_t copy_from_user_iter(void __user *iter_from, size_t progress,
+			   size_t len, void *to, void *priv2)
 {
-	size_t res = n;
+	size_t res = len;
 
 	if (should_fail_usercopy())
-		return n;
-	if (access_ok(from, n)) {
-		instrument_copy_from_user_before(to, from, n);
-		res = raw_copy_from_user(to, from, n);
-		instrument_copy_from_user_after(to, from, n, res);
+		return len;
+	if (access_ok(iter_from, len)) {
+		to += progress;
+		instrument_copy_from_user_before(to, iter_from, len);
+		res = raw_copy_from_user(to, iter_from, len);
+		instrument_copy_from_user_after(to, iter_from, len, res);
 	}
 	return res;
 }
 
+static __always_inline
+size_t memcpy_to_iter(void *iter_to, size_t progress,
+		      size_t len, void *from, void *priv2)
+{
+	memcpy(iter_to, from + progress, len);
+	return 0;
+}
+
+static __always_inline
+size_t memcpy_from_iter(void *iter_from, size_t progress,
+			size_t len, void *to, void *priv2)
+{
+	memcpy(to + progress, iter_from, len);
+	return 0;
+}
+
 /*
  * fault_in_iov_iter_readable - fault in iov iterator for reading
  * @i: iterator
@@ -312,23 +192,29 @@ size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 		return 0;
 	if (user_backed_iter(i))
 		might_fault();
-	iterate_and_advance(i, bytes, base, len, off,
-			    copyout(base, addr + off, len),
-			    memcpy(base, addr + off, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, (void *)addr,
+				   copy_to_user_iter, memcpy_to_iter);
 }
 EXPORT_SYMBOL(_copy_to_iter);
 
 #ifdef CONFIG_ARCH_HAS_COPY_MC
-static int copyout_mc(void __user *to, const void *from, size_t n)
-{
-	if (access_ok(to, n)) {
-		instrument_copy_to_user(to, from, n);
-		n = copy_mc_to_user((__force void *) to, from, n);
+static __always_inline
+size_t copy_to_user_iter_mc(void __user *iter_to, size_t progress,
+			    size_t len, void *from, void *priv2)
+{
+	if (access_ok(iter_to, len)) {
+		from += progress;
+		instrument_copy_to_user(iter_to, from, len);
+		len = copy_mc_to_user(iter_to, from, len);
 	}
-	return n;
+	return len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_mc(void *iter_to, size_t progress,
+			 size_t len, void *from, void *priv2)
+{
+	return copy_mc_to_kernel(iter_to, from + progress, len);
 }
 
 /**
@@ -361,22 +247,20 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 		return 0;
 	if (user_backed_iter(i))
 		might_fault();
-	__iterate_and_advance(i, bytes, base, len, off,
-			      copyout_mc(base, addr + off, len),
-			      copy_mc_to_kernel(base, addr + off, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, (void *)addr,
+				   copy_to_user_iter_mc, memcpy_to_iter_mc);
 }
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
-static void *memcpy_from_iter(struct iov_iter *i, void *to, const void *from,
-			      size_t size)
+static size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
+				  size_t len, void *to, void *priv2)
 {
-	if (iov_iter_is_copy_mc(i))
-		return (void *)copy_mc_to_kernel(to, from, size);
-	return memcpy(to, from, size);
+	struct iov_iter *iter = priv2;
+
+	if (iov_iter_is_copy_mc(iter))
+		return copy_mc_to_kernel(to + progress, iter_from, len);
+	return memcpy_from_iter(iter_from, progress, len, to, priv2);
 }
 
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
@@ -386,30 +270,46 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 	if (user_backed_iter(i))
 		might_fault();
-	iterate_and_advance(i, bytes, base, len, off,
-			    copyin(addr + off, base, len),
-			    memcpy_from_iter(i, addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance2(i, bytes, addr, i,
+				    copy_from_user_iter,
+				    memcpy_from_iter_mc);
 }
 EXPORT_SYMBOL(_copy_from_iter);
 
+static __always_inline
+size_t copy_from_user_iter_nocache(void __user *iter_from, size_t progress,
+				   size_t len, void *to, void *priv2)
+{
+	return __copy_from_user_inatomic_nocache(to + progress, iter_from, len);
+}
+
 size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
 
-	iterate_and_advance(i, bytes, base, len, off,
-			    __copy_from_user_inatomic_nocache(addr + off, base, len),
-			    memcpy(addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter_nocache,
+				   memcpy_from_iter);
 }
 EXPORT_SYMBOL(_copy_from_iter_nocache);
 
 #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
+static __always_inline
+size_t copy_from_user_iter_flushcache(void __user *iter_from, size_t progress,
+				      size_t len, void *to, void *priv2)
+{
+	return __copy_from_user_flushcache(to + progress, iter_from, len);
+}
+
+static __always_inline
+size_t memcpy_from_iter_flushcache(void *iter_from, size_t progress,
+				   size_t len, void *to, void *priv2)
+{
+	memcpy_flushcache(to + progress, iter_from, len);
+	return 0;
+}
+
 /**
  * _copy_from_iter_flushcache - write destination through cpu cache
  * @addr: destination kernel address
@@ -431,12 +331,9 @@ size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
 
-	iterate_and_advance(i, bytes, base, len, off,
-			    __copy_from_user_flushcache(addr + off, base, len),
-			    memcpy_flushcache(addr + off, base, len)
-	)
-
-	return bytes;
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter_flushcache,
+				   memcpy_from_iter_flushcache);
 }
 EXPORT_SYMBOL_GPL(_copy_from_iter_flushcache);
 #endif
 
@@ -508,10 +405,9 @@ size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
 		void *kaddr = kmap_local_page(page);
 		size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
 
-		iterate_and_advance(i, n, base, len, off,
-			copyout_nofault(base, kaddr + offset + off, len),
-			memcpy(base, kaddr + offset + off, len)
-		)
+		n = iterate_and_advance(i, n, kaddr + offset,
+					copy_to_user_iter_nofault,
+					memcpy_to_iter);
 		kunmap_local(kaddr);
 		res += n;
 		bytes -= n;
@@ -554,14 +450,25 @@ size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
 }
 EXPORT_SYMBOL(copy_page_from_iter);
 
-size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
+static __always_inline
+size_t zero_to_user_iter(void __user *iter_to, size_t progress,
+			 size_t len, void *priv, void *priv2)
 {
-	iterate_and_advance(i, bytes, base, len, count,
-			    clear_user(base, len),
-			    memset(base, 0, len)
-	)
+	return clear_user(iter_to, len);
+}
 
-	return bytes;
+static __always_inline
+size_t zero_to_iter(void *iter_to, size_t progress,
+		    size_t len, void *priv, void *priv2)
+{
+	memset(iter_to, 0, len);
+	return 0;
+}
+
+size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
+{
+	return iterate_and_advance(i, bytes, NULL,
+				   zero_to_user_iter, zero_to_iter);
 }
 EXPORT_SYMBOL(iov_iter_zero);
 
@@ -586,10 +493,9 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		}
 		p = kmap_atomic(page) + offset;
-		iterate_and_advance(i, n, base, len, off,
-				    copyin(p + off, base, len),
-				    memcpy_from_iter(i, p + off, base, len)
-		)
+		n = iterate_and_advance2(i, n, p, i,
+					 copy_from_user_iter,
+					 memcpy_from_iter_mc);
 		kunmap_atomic(p);
 		copied += n;
 		offset += n;
@@ -1180,32 +1086,64 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
+static __always_inline
+size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
+				size_t len, void *to, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_from_user(iter_from, to + progress, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
+			     size_t len, void *to, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+	return 0;
+}
+
 size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
 			       struct iov_iter *i)
 {
-	__wsum sum, next;
-	sum = *csum;
 	if (WARN_ON_ONCE(!i->data_source))
 		return 0;
-
-	iterate_and_advance(i, bytes, base, len, off, ({
-		next = csum_and_copy_from_user(base, addr + off, len);
-		sum = csum_block_add(sum, next, off);
-		next ? 0 : len;
-	}), ({
-		sum = csum_and_memcpy(addr + off, base, len, sum, off);
-	})
-	)
-	*csum = sum;
-	return bytes;
+	return iterate_and_advance2(i, bytes, addr, csum,
+				    copy_from_user_iter_csum,
+				    memcpy_from_iter_csum);
 }
 EXPORT_SYMBOL(csum_and_copy_from_iter);
 
+static __always_inline
+size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
+			      size_t len, void *from, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_to_user(from + progress, iter_to, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
+			   size_t len, void *from, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+	return 0;
+}
+
 size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
 			     struct iov_iter *i)
 {
 	struct csum_state *csstate = _csstate;
-	__wsum sum, next;
+	__wsum sum;
 
 	if (WARN_ON_ONCE(i->data_source))
 		return 0;
@@ -1219,14 +1157,10 @@ size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
 	}
 
 	sum = csum_shift(csstate->csum, csstate->off);
-	iterate_and_advance(i, bytes, base, len, off, ({
-		next = csum_and_copy_to_user(addr + off, base, len);
-		sum = csum_block_add(sum, next, off);
-		next ? 0 : len;
-	}), ({
-		sum = csum_and_memcpy(base, addr + off, len, sum, off);
-	})
-	)
+
+	bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
+				     copy_to_user_iter_csum,
+				     memcpy_to_iter_csum);
 	csstate->csum = csum_shift(sum, csstate->off);
 	csstate->off += bytes;
 	return bytes;
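To illustrate the kind of scan the new header enables (the all-zeros
check the commit message suggests for rbd), here is a sketch only, with
a hypothetical helper name, no bounce-buffering for user-backed
iterators, and no handling of iterators shorter than the requested
length:

	#include <linux/iov_iter.h>
	#include <linux/string.h>

	/* Step functions return the bytes of the segment they did NOT
	 * process; a non-zero remainder stops the iteration.  Stop as
	 * soon as a non-zero byte is seen.
	 */
	static size_t scan_nonzero_step(void *iter_base, size_t progress,
					size_t len, void *priv, void *priv2)
	{
		return memchr_inv(iter_base, 0, len) ? len : 0;
	}

	static size_t scan_nonzero_ustep(void __user *iter_base, size_t progress,
					 size_t len, void *priv, void *priv2)
	{
		/* User memory can't be scanned in place; a real version
		 * would copy it into a bounce buffer first.
		 */
		return len;
	}

	/* True if the next @len bytes are all zero; advances @iter over
	 * whatever it examined.  (If @iter holds fewer than @len bytes,
	 * this returns false.)
	 */
	static bool iov_iter_is_zero(struct iov_iter *iter, size_t len)
	{
		return iterate_and_advance(iter, len, NULL,
					   scan_nonzero_ustep,
					   scan_nonzero_step) == len;
	}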
From patchwork Fri Sep 22 12:02:22 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395747
From: David Howells <dhowells@redhat.com>
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 08/13] iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()
Date: Fri, 22 Sep 2023 13:02:22 +0100
Message-ID: <20230922120227.1173720-9-dhowells@redhat.com>

iter->copy_mc is only used with a bvec iterator and only by
dump_emit_page() in fs/coredump.c, so rather than handle this in
memcpy_from_iter_mc(), where it is checked repeatedly by
_copy_from_iter() and copy_page_from_iter_atomic(), hoist the check out
of the step function: test the flag once in a __copy_from_iter() wrapper
and switch to a separate bvec-only iteration path
(__copy_from_iter_mc()) when it is set.

Signed-off-by: David Howells <dhowells@redhat.com>
---
 lib/iov_iter.c | 39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 65374ee91ecd..943aa3cfd7b3 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -253,14 +253,33 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
-static size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
-				  size_t len, void *to, void *priv2)
+static __always_inline
+size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
+			   size_t len, void *to, void *priv2)
+{
+	return copy_mc_to_kernel(to + progress, iter_from, len);
+}
+
+static size_t __copy_from_iter_mc(void *addr, size_t bytes, struct iov_iter *i)
 {
-	struct iov_iter *iter = priv2;
+	size_t progress;
 
-	if (iov_iter_is_copy_mc(iter))
-		return copy_mc_to_kernel(to + progress, iter_from, len);
-	return memcpy_from_iter(iter_from, progress, len, to, priv2);
+	if (unlikely(i->count < bytes))
+		bytes = i->count;
+	if (unlikely(!bytes))
+		return 0;
+	progress = iterate_bvec(i, bytes, addr, NULL, memcpy_from_iter_mc);
+	i->count -= progress;
+	return progress;
+}
+
+static __always_inline
+size_t __copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
+{
+	if (unlikely(iov_iter_is_copy_mc(i)))
+		return __copy_from_iter_mc(addr, bytes, i);
+	return iterate_and_advance(i, bytes, addr,
+				   copy_from_user_iter, memcpy_from_iter);
 }
 
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
@@ -270,9 +289,7 @@ size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
 	if (user_backed_iter(i))
 		might_fault();
-	return iterate_and_advance2(i, bytes, addr, i,
-				    copy_from_user_iter,
-				    memcpy_from_iter_mc);
+	return __copy_from_iter(addr, bytes, i);
 }
 EXPORT_SYMBOL(_copy_from_iter);
 
@@ -493,9 +510,7 @@ size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
 		}
 		p = kmap_atomic(page) + offset;
-		n = iterate_and_advance2(i, n, p, i,
-					 copy_from_user_iter,
-					 memcpy_from_iter_mc);
+		n = __copy_from_iter(p, n, i);
 		kunmap_atomic(p);
 		copied += n;
 		offset += n;
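For context, the sole ->copy_mc consumer looks roughly like this
(paraphrased from fs/coredump.c; error handling and bookkeeping elided,
so details differ from the real function):

	/* Paraphrase of dump_emit_page(): the only place the flag is
	 * set, and always on a single-segment BVEC iterator, which is
	 * why the mc path above only needs iterate_bvec().
	 */
	static int dump_emit_page(struct coredump_params *cprm, struct page *page)
	{
		struct bio_vec bvec;
		struct iov_iter iter;
		ssize_t n;

		bvec_set_page(&bvec, page, PAGE_SIZE, 0);
		iov_iter_bvec(&iter, ITER_SOURCE, &bvec, 1, PAGE_SIZE);
		iov_iter_set_copy_mc(&iter);	/* tolerate memory failure */
		n = __kernel_write_iter(cprm->file, &iter, &cprm->pos);
		return n == PAGE_SIZE;
	}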
From patchwork Fri Sep 22 12:02:23 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395749
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 09/13] iov_iter: Add a kernel-type iterator-only iteration
 function
Date: Fri, 22 Sep 2023 13:02:23 +0100
Message-ID: <20230922120227.1173720-10-dhowells@redhat.com>
In-Reply-To: <20230922120227.1173720-1-dhowells@redhat.com>
References: <20230922120227.1173720-1-dhowells@redhat.com>

Add an iteration function that can only iterate over kernel internal-type
iterators (ie. BVEC, KVEC, XARRAY) and not user-backed iterators (ie. UBUF
and IOVEC).  This allows smaller iteration code to be built when it is known
that the caller won't have a user-backed iterator.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
Notes:
    Changes
    =======
    ver #6)
     - Document the priv2 arg of iterate_and_advance_kernel().
 include/linux/iov_iter.h | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h
index 270454a6703d..d8733dc22b54 100644
--- a/include/linux/iov_iter.h
+++ b/include/linux/iov_iter.h
@@ -271,4 +271,36 @@ size_t iterate_and_advance(struct iov_iter *iter, size_t len, void *priv,
 	return iterate_and_advance2(iter, len, priv, NULL, ustep, step);
 }
 
+/**
+ * iterate_and_advance_kernel - Iterate over a kernel iterator
+ * @iter: The iterator to iterate over.
+ * @len: The amount to iterate over.
+ * @priv: Data for the step functions.
+ * @priv2: More data for the step functions.
+ * @step: Processing function; given kernel addresses.
+ *
+ * Like iterate_and_advance2(), but rejects UBUF and IOVEC iterators and does
+ * not take a user-step function.
+ */
+static __always_inline
+size_t iterate_and_advance_kernel(struct iov_iter *iter, size_t len, void *priv,
+				  void *priv2, iov_step_f step)
+{
+	if (unlikely(iter->count < len))
+		len = iter->count;
+	if (unlikely(!len))
+		return 0;
+
+	if (iov_iter_is_bvec(iter))
+		return iterate_bvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_kvec(iter))
+		return iterate_kvec(iter, len, priv, priv2, step);
+	if (iov_iter_is_xarray(iter))
+		return iterate_xarray(iter, len, priv, priv2, step);
+	if (iov_iter_is_discard(iter))
+		return iterate_discard(iter, len, priv, priv2, step);
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
 #endif /* _LINUX_IOV_ITER_H */
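As a usage sketch, a copy helper restricted to kernel-backed iterators could
be built from the new function together with the existing memcpy_from_iter()
step from lib/iov_iter.c; the wrapper name here is invented for illustration:

	static size_t copy_from_kernel_iter(void *addr, size_t bytes,
					    struct iov_iter *i)
	{
		/* No user-step function is needed: a UBUF or IOVEC
		 * iterator is rejected (WARN_ON_ONCE() and a zero return),
		 * so no user-access handling is pulled into the caller.
		 */
		return iterate_and_advance_kernel(i, bytes, addr, NULL,
						  memcpy_from_iter);
	}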
From patchwork Fri Sep 22 12:02:24 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395756
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 10/13] iov_iter, net: Move csum_and_copy_to/from_iter() to net/
Date: Fri, 22 Sep 2023 13:02:24 +0100
Message-ID: <20230922120227.1173720-11-dhowells@redhat.com>
In-Reply-To: <20230922120227.1173720-1-dhowells@redhat.com>
References: <20230922120227.1173720-1-dhowells@redhat.com>

Move csum_and_copy_to/from_iter() to net code now that the iteration
framework can be #included.

Signed-off-by: David Howells
---
 include/linux/skbuff.h | 25 ++++++++++++
 include/linux/uio.h    | 18 ---------
 lib/iov_iter.c         | 89 ------------------------------------------
 net/core/datagram.c    | 50 +++++++++++++++++++++++-
 net/core/skbuff.c      | 33 ++++++++++++++++
 5 files changed, 107 insertions(+), 108 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 4174c4b82d13..d0656cc11c16 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3679,6 +3679,31 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l
 	return __skb_put_padto(skb, len, true);
 }
 
+static inline __wsum csum_and_memcpy(void *to, const void *from, size_t len,
+				     __wsum sum, size_t off)
+{
+	__wsum next = csum_partial_copy_nocheck(from, to, len);
+	return csum_block_add(sum, next, off);
+}
+
+struct csum_state {
+	__wsum csum;
+	size_t off;
+};
+
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
+
+static __always_inline __must_check
+bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
+				  __wsum *csum, struct iov_iter *i)
+{
+	size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
+	if (likely(copied == bytes))
+		return true;
+	iov_iter_revert(i, copied);
+	return false;
+}
+
 static inline int skb_add_data(struct sk_buff *skb,
			       struct iov_iter *from, int copy)
 {
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 65d9143f83c8..0a5426c97e02 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -338,24 +338,6 @@ iov_iter_npages_cap(struct iov_iter *i, int maxpages, size_t max_bytes)
 	return npages;
 }
 
-struct csum_state {
-	__wsum csum;
-	size_t off;
-};
-
-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *csstate, struct iov_iter *i);
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i);
-
-static __always_inline __must_check
-bool csum_and_copy_from_iter_full(void *addr, size_t bytes,
-				  __wsum *csum, struct iov_iter *i)
-{
-	size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i);
-	if (likely(copied == bytes))
-		return true;
-	iov_iter_revert(i, copied);
-	return false;
-}
 
 size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
			     struct iov_iter *i);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 943aa3cfd7b3..fef934a8745d 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -10,7 +10,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -179,13 +178,6 @@ void iov_iter_init(struct iov_iter *i, unsigned int direction,
 }
 EXPORT_SYMBOL(iov_iter_init);
 
-static __wsum csum_and_memcpy(void *to, const void *from, size_t len,
-			      __wsum sum, size_t off)
-{
-	__wsum next = csum_partial_copy_nocheck(from, to, len);
-	return csum_block_add(sum, next, off);
-}
-
 size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (WARN_ON_ONCE(i->data_source))
@@ -1101,87 +1093,6 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc2);
 
-static __always_inline
-size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
-				size_t len, void *to, void *priv2)
-{
-	__wsum next, *csum = priv2;
-
-	next = csum_and_copy_from_user(iter_from, to + progress, len);
-	*csum = csum_block_add(*csum, next, progress);
-	return next ? 0 : len;
-}
-
-static __always_inline
-size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
-			     size_t len, void *to, void *priv2)
-{
-	__wsum *csum = priv2;
-
-	*csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
-	return 0;
-}
-
-size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
-			       struct iov_iter *i)
-{
-	if (WARN_ON_ONCE(!i->data_source))
-		return 0;
-	return iterate_and_advance2(i, bytes, addr, csum,
-				    copy_from_user_iter_csum,
-				    memcpy_from_iter_csum);
-}
-EXPORT_SYMBOL(csum_and_copy_from_iter);
-
-static __always_inline
-size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
-			      size_t len, void *from, void *priv2)
-{
-	__wsum next, *csum = priv2;
-
-	next = csum_and_copy_to_user(from + progress, iter_to, len);
-	*csum = csum_block_add(*csum, next, progress);
-	return next ? 0 : len;
-}
-
-static __always_inline
-size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
-			   size_t len, void *from, void *priv2)
-{
-	__wsum *csum = priv2;
-
-	*csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
-	return 0;
-}
-
-size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
-			     struct iov_iter *i)
-{
-	struct csum_state *csstate = _csstate;
-	__wsum sum;
-
-	if (WARN_ON_ONCE(i->data_source))
-		return 0;
-	if (unlikely(iov_iter_is_discard(i))) {
-		// can't use csum_memcpy() for that one - data is not copied
-		csstate->csum = csum_block_add(csstate->csum,
-					       csum_partial(addr, bytes, 0),
-					       csstate->off);
-		csstate->off += bytes;
-		return bytes;
-	}
-
-	sum = csum_shift(csstate->csum, csstate->off);
-
-	bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
-				     copy_to_user_iter_csum,
-				     memcpy_to_iter_csum);
-	csstate->csum = csum_shift(sum, csstate->off);
-	csstate->off += bytes;
-	return bytes;
-}
-EXPORT_SYMBOL(csum_and_copy_to_iter);
-
 size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
			     struct iov_iter *i)
 {
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 176eb5834746..37c89d0933b7 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -50,7 +50,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
@@ -716,6 +716,54 @@ int zerocopy_sg_from_iter(struct sk_buff *skb, struct iov_iter *from)
 }
 EXPORT_SYMBOL(zerocopy_sg_from_iter);
 
+static __always_inline
+size_t copy_to_user_iter_csum(void __user *iter_to, size_t progress,
+			      size_t len, void *from, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_to_user(from + progress, iter_to, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+static __always_inline
+size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
+			   size_t len, void *from, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+	return 0;
+}
+
+static size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate,
+				    struct iov_iter *i)
+{
+	struct csum_state *csstate = _csstate;
+	__wsum sum;
+
+	if (WARN_ON_ONCE(i->data_source))
+		return 0;
+	if (unlikely(iov_iter_is_discard(i))) {
+		// can't use csum_memcpy() for that one - data is not copied
+		csstate->csum = csum_block_add(csstate->csum,
+					       csum_partial(addr, bytes, 0),
+					       csstate->off);
+		csstate->off += bytes;
+		return bytes;
+	}
+
+	sum = csum_shift(csstate->csum, csstate->off);
+
+	bytes = iterate_and_advance2(i, bytes, (void *)addr, &sum,
+				     copy_to_user_iter_csum,
+				     memcpy_to_iter_csum);
+	csstate->csum = csum_shift(sum, csstate->off);
+	csstate->off += bytes;
+	return bytes;
+}
+
 /**
  * skb_copy_and_csum_datagram - Copy datagram to an iovec iterator
  *	and update a checksum.
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 4eaf7ed0d1f4..5dbdfce2d05f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -62,6 +62,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -6931,3 +6932,35 @@ ssize_t skb_splice_from_iter(struct sk_buff *skb, struct iov_iter *iter,
 	return spliced ?: ret;
 }
 EXPORT_SYMBOL(skb_splice_from_iter);
+
+static __always_inline
+size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
+			     size_t len, void *to, void *priv2)
+{
+	__wsum *csum = priv2;
+
+	*csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+	return 0;
+}
+
+static __always_inline
+size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress,
+				size_t len, void *to, void *priv2)
+{
+	__wsum next, *csum = priv2;
+
+	next = csum_and_copy_from_user(iter_from, to + progress, len);
+	*csum = csum_block_add(*csum, next, progress);
+	return next ? 0 : len;
+}
+
+size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
+			       struct iov_iter *i)
+{
+	if (WARN_ON_ONCE(!i->data_source))
+		return 0;
+	return iterate_and_advance2(i, bytes, addr, csum,
+				    copy_from_user_iter_csum,
+				    memcpy_from_iter_csum);
+}
+EXPORT_SYMBOL(csum_and_copy_from_iter);
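With the framework header visible to net/ code, the same two-cookie
convention is open to other users: priv carries the linear buffer and priv2
points at the accumulator that every step folds into.  A hypothetical CRC
variant, patterned on csum_and_copy_from_iter() above (crc32_le() is a real
kernel helper; the step and wrapper names are invented for illustration):

	static __always_inline
	size_t memcpy_from_iter_crc(void *iter_from, size_t progress,
				    size_t len, void *to, void *priv2)
	{
		u32 *crc = priv2;

		memcpy(to + progress, iter_from, len);
		*crc = crc32_le(*crc, iter_from, len);
		return 0;
	}

	static size_t crc_and_copy_from_iter(void *addr, size_t bytes,
					     u32 *crc, struct iov_iter *i)
	{
		/* Kernel-only variant, using iterate_and_advance_kernel()
		 * from patch 09.  Unlike the Internet checksum, a CRC has
		 * no odd/even offset sensitivity, so progress is not
		 * needed when folding into the accumulator.
		 */
		return iterate_and_advance_kernel(i, bytes, addr, crc,
						  memcpy_from_iter_crc);
	}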
From patchwork Fri Sep 22 12:02:25 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395755
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v6 11/13] iov_iter, net: Fold in csum_and_memcpy()
Date: Fri, 22 Sep 2023 13:02:25 +0100
Message-ID: <20230922120227.1173720-12-dhowells@redhat.com>
In-Reply-To: <20230922120227.1173720-1-dhowells@redhat.com>
References: <20230922120227.1173720-1-dhowells@redhat.com>

Fold csum_and_memcpy() into its callers.
Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
---
 include/linux/skbuff.h | 7 -------
 net/core/datagram.c    | 3 ++-
 net/core/skbuff.c      | 3 ++-
 3 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index d0656cc11c16..c81ef5d76953 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3679,13 +3679,6 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l
 	return __skb_put_padto(skb, len, true);
 }
 
-static inline __wsum csum_and_memcpy(void *to, const void *from, size_t len,
-				     __wsum sum, size_t off)
-{
-	__wsum next = csum_partial_copy_nocheck(from, to, len);
-	return csum_block_add(sum, next, off);
-}
-
 struct csum_state {
 	__wsum csum;
 	size_t off;
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 37c89d0933b7..452620dd41e4 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -732,8 +732,9 @@ size_t memcpy_to_iter_csum(void *iter_to, size_t progress,
 			   size_t len, void *from, void *priv2)
 {
 	__wsum *csum = priv2;
+	__wsum next = csum_partial_copy_nocheck(from + progress, iter_to, len);
 
-	*csum = csum_and_memcpy(iter_to, from + progress, len, *csum, progress);
+	*csum = csum_block_add(*csum, next, progress);
 	return 0;
 }
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 5dbdfce2d05f..3efed86321db 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -6938,8 +6938,9 @@ size_t memcpy_from_iter_csum(void *iter_from, size_t progress,
 			     size_t len, void *to, void *priv2)
 {
 	__wsum *csum = priv2;
+	__wsum next = csum_partial_copy_nocheck(iter_from, to + progress, len);
 
-	*csum = csum_and_memcpy(to + progress, iter_from, len, *csum, progress);
+	*csum = csum_block_add(*csum, next, progress);
 	return 0;
 }
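Note that the folded code still passes progress to csum_block_add().  The
offset matters because the Internet checksum is a 16-bit ones'-complement
sum: a chunk that starts at an odd byte offset contributes its bytes to the
opposite byte lanes and has to be rotated before it can be folded in.  An
illustrative userspace model of the folding step (this mirrors what
csum_block_add() is understood to do with ror32(); it is not the kernel
implementation):

	#include <stdint.h>
	#include <stddef.h>

	static uint32_t ror32(uint32_t x, unsigned int n)
	{
		return (x >> n) | (x << (32 - n));
	}

	/* Fold a chunk's 32-bit partial sum into the running sum. */
	static uint32_t fold_block(uint32_t sum, uint32_t chunk, size_t offset)
	{
		if (offset & 1)
			chunk = ror32(chunk, 8);	/* realign byte lanes */
		sum += chunk;
		return sum + (sum < chunk);		/* end-around carry */
	}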
From patchwork Fri Sep 22 12:02:26 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395750
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v6 12/13] iov_iter, net: Merge csum_and_copy_from_iter{,_full}() together
Date: Fri, 22 Sep 2023 13:02:26 +0100
Message-ID: <20230922120227.1173720-13-dhowells@redhat.com>
In-Reply-To: <20230922120227.1173720-1-dhowells@redhat.com>
References: <20230922120227.1173720-1-dhowells@redhat.com>

Move csum_and_copy_from_iter_full() out of line and then merge
csum_and_copy_from_iter() into its only caller.

Signed-off-by: David Howells
cc: Alexander Viro
cc: Jens Axboe
cc: Christoph Hellwig
cc: Christian Brauner
cc: Matthew Wilcox
cc: Linus Torvalds
cc: David Laight
cc: "David S. Miller"
cc: Eric Dumazet
cc: Jakub Kicinski
cc: Paolo Abeni
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: netdev@vger.kernel.org
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org --- include/linux/skbuff.h | 19 ++----------------- net/core/datagram.c | 5 +++++ net/core/skbuff.c | 20 +++++++++++++------- 3 files changed, 20 insertions(+), 24 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index c81ef5d76953..be402f55f6d6 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3679,23 +3679,8 @@ static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int l return __skb_put_padto(skb, len, true); } -struct csum_state { - __wsum csum; - size_t off; -}; - -size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i); - -static __always_inline __must_check -bool csum_and_copy_from_iter_full(void *addr, size_t bytes, - __wsum *csum, struct iov_iter *i) -{ - size_t copied = csum_and_copy_from_iter(addr, bytes, csum, i); - if (likely(copied == bytes)) - return true; - iov_iter_revert(i, copied); - return false; -} +bool csum_and_copy_from_iter_full(void *addr, size_t bytes, __wsum *csum, struct iov_iter *i) + __must_check; static inline int skb_add_data(struct sk_buff *skb, struct iov_iter *from, int copy) diff --git a/net/core/datagram.c b/net/core/datagram.c index 452620dd41e4..722311eeee18 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -738,6 +738,11 @@ size_t memcpy_to_iter_csum(void *iter_to, size_t progress, return 0; } +struct csum_state { + __wsum csum; + size_t off; +}; + static size_t csum_and_copy_to_iter(const void *addr, size_t bytes, void *_csstate, struct iov_iter *i) { diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 3efed86321db..2bfa6a7ba244 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -6955,13 +6955,19 @@ size_t copy_from_user_iter_csum(void __user *iter_from, size_t progress, return next ? 
From patchwork Fri Sep 22 12:02:27 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13395751
From: David Howells
To: Jens Axboe
Cc: David Howells, Al Viro, Linus Torvalds, Christoph Hellwig,
    Christian Brauner, David Laight, Matthew Wilcox, Jeff Layton,
    linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    linux-mm@kvack.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Subject: [PATCH v6 13/13] iov_iter, net: Move hash_and_copy_to_iter() to net/ Date: Fri, 22 Sep 2023 13:02:27 +0100 Message-ID: <20230922120227.1173720-14-dhowells@redhat.com> In-Reply-To: <20230922120227.1173720-1-dhowells@redhat.com> References: <20230922120227.1173720-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.4 Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Move hash_and_copy_to_iter() to be with its only caller in networking code. Signed-off-by: David Howells cc: Alexander Viro cc: Jens Axboe cc: Christoph Hellwig cc: Christian Brauner cc: Matthew Wilcox cc: Linus Torvalds cc: David Laight cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-block@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: linux-mm@kvack.org cc: netdev@vger.kernel.org --- include/linux/uio.h | 3 --- lib/iov_iter.c | 20 -------------------- net/core/datagram.c | 19 +++++++++++++++++++ 3 files changed, 19 insertions(+), 23 deletions(-) diff --git a/include/linux/uio.h b/include/linux/uio.h index 0a5426c97e02..b6214cbf2a43 100644 --- a/include/linux/uio.h +++ b/include/linux/uio.h @@ -338,9 +338,6 @@ iov_iter_npages_cap(struct iov_iter *i, int maxpages, size_t max_bytes) return npages; } -size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp, - struct iov_iter *i); - struct iovec *iovec_from_user(const struct iovec __user *uvector, unsigned long nr_segs, unsigned long fast_segs, struct iovec *fast_iov, bool compat); diff --git a/lib/iov_iter.c b/lib/iov_iter.c index fef934a8745d..2547c96d56c7 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -1,5 +1,4 @@ // SPDX-License-Identifier: GPL-2.0-only -#include #include #include #include @@ -1093,25 +1092,6 @@ ssize_t iov_iter_get_pages_alloc2(struct iov_iter *i, } EXPORT_SYMBOL(iov_iter_get_pages_alloc2); -size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp, - struct iov_iter *i) -{ -#ifdef CONFIG_CRYPTO_HASH - struct ahash_request *hash = hashp; - struct scatterlist sg; - size_t copied; - - copied = copy_to_iter(addr, bytes, i); - sg_init_one(&sg, addr, copied); - ahash_request_set_crypt(hash, &sg, NULL, copied); - crypto_ahash_update(hash); - return copied; -#else - return 0; -#endif -} -EXPORT_SYMBOL(hash_and_copy_to_iter); - static int iov_npages(const struct iov_iter *i, int maxpages) { size_t skip = i->iov_offset, size = i->count; diff --git a/net/core/datagram.c b/net/core/datagram.c index 722311eeee18..103d46fa0eeb 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -61,6 +61,7 @@ #include #include #include +#include /* * Is a socket 'connection oriented' ? @@ -489,6 +490,24 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset, return 0; } +static size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp, + struct iov_iter *i) +{ +#ifdef CONFIG_CRYPTO_HASH + struct ahash_request *hash = hashp; + struct scatterlist sg; + size_t copied; + + copied = copy_to_iter(addr, bytes, i); + sg_init_one(&sg, addr, copied); + ahash_request_set_crypt(hash, &sg, NULL, copied); + crypto_ahash_update(hash); + return copied; +#else + return 0; +#endif +} + /** * skb_copy_and_hash_datagram_iter - Copy datagram to an iovec iterator * and update a hash.