From patchwork Fri Oct 23 19:32:15 2015
X-Patchwork-Submitter: "Schumaker, Anna"
X-Patchwork-Id: 7478011
From: Anna Schumaker
Subject: [PATCH v7 4/4] vfs: Add vfs_copy_file_range() support for pagecache copies
Date: Fri, 23 Oct 2015 15:32:15 -0400
Message-ID: <1445628736-13058-5-git-send-email-Anna.Schumaker@Netapp.com>
In-Reply-To: <1445628736-13058-1-git-send-email-Anna.Schumaker@Netapp.com>
References: <1445628736-13058-1-git-send-email-Anna.Schumaker@Netapp.com>
X-Mailing-List: linux-nfs@vger.kernel.org

This gives us an in-kernel copy mechanism that avoids frequent switches
between kernel and user space.  This is especially useful so that NFSD
can support server-side copies.

Check first whether the filesystem supports any kind of copy
acceleration, and fall back to copying through the pagecache if it does
not.  I moved the rw_verify_area() calls into the fallback code, since
rw_verify_area() caps the length at MAX_RW_COUNT but some filesystems
can handle reflinking a larger range.

Signed-off-by: Anna Schumaker
Reviewed-by: Darrick J. Wong
Reviewed-by: Padraig Brady
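
As an illustration (not part of the patch), a user-space caller would exercise
this path through the copy_file_range() system call roughly as follows.  This
sketch assumes a libc that exposes a copy_file_range() wrapper; on older
systems the raw syscall(__NR_copy_file_range, ...) form is needed instead.
The kernel first tries the filesystem's accelerated ->copy_file_range()
method and, with this patch, falls back to a pagecache copy when none is
available or it returns -EOPNOTSUPP.

/* Illustrative user-space sketch: copy one regular file to another with
 * copy_file_range(2).  The kernel picks the copy method: an accelerated
 * ->copy_file_range() implementation if the filesystem has one, otherwise
 * (with this patch) a pagecache copy via do_splice_direct(). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd_in, fd_out;
	struct stat st;
	off_t remaining;
	ssize_t copied;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <source> <dest>\n", argv[0]);
		return 1;
	}

	fd_in = open(argv[1], O_RDONLY);
	if (fd_in < 0 || fstat(fd_in, &st) < 0) {
		perror(argv[1]);
		return 1;
	}

	fd_out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd_out < 0) {
		perror(argv[2]);
		return 1;
	}

	/* copy_file_range() may copy less than asked for, so loop. */
	for (remaining = st.st_size; remaining > 0; remaining -= copied) {
		copied = copy_file_range(fd_in, NULL, fd_out, NULL,
					 remaining, 0);
		if (copied < 0) {
			perror("copy_file_range");
			return 1;
		}
		if (copied == 0)	/* unexpected EOF on the source */
			break;
	}

	close(fd_in);
	close(fd_out);
	return 0;
}

Note the loop: as the comment above vfs_copy_file_range() in the diff says,
the call is explicitly allowed to return partial success.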
---
v7:
- Remove checks for COPY_FR_REFLINK
---
 fs/read_write.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)

diff --git a/fs/read_write.c b/fs/read_write.c
index 89c9e65..d2da7e4 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -1329,6 +1329,24 @@ COMPAT_SYSCALL_DEFINE4(sendfile64, int, out_fd, int, in_fd,
 }
 #endif
 
+static ssize_t vfs_copy_fr_copy(struct file *file_in, loff_t pos_in,
+				struct file *file_out, loff_t pos_out,
+				size_t len)
+{
+	ssize_t ret = rw_verify_area(READ, file_in, &pos_in, len);
+
+	if (ret >= 0) {
+		len = ret;
+		ret = rw_verify_area(WRITE, file_out, &pos_out, len);
+		if (ret >= 0)
+			len = ret;
+	}
+	if (ret < 0)
+		return ret;
+
+	return do_splice_direct(file_in, &pos_in, file_out, &pos_out, len, 0);
+}
+
 /*
  * copy_file_range() differs from regular file read and write in that it
  * specifically allows return partial success.  When it does so is up to
@@ -1345,17 +1363,10 @@ ssize_t vfs_copy_file_range(struct file *file_in, loff_t pos_in,
 	if (flags != 0)
 		return -EINVAL;
 
-	/* copy_file_range allows full ssize_t len, ignoring MAX_RW_COUNT */
-	ret = rw_verify_area(READ, file_in, &pos_in, len);
-	if (ret >= 0)
-		ret = rw_verify_area(WRITE, file_out, &pos_out, len);
-	if (ret < 0)
-		return ret;
-
 	if (!(file_in->f_mode & FMODE_READ) ||
 	    !(file_out->f_mode & FMODE_WRITE) ||
 	    (file_out->f_flags & O_APPEND) ||
-	    !file_out->f_op || !file_out->f_op->copy_file_range)
+	    !file_out->f_op)
 		return -EBADF;
 
 	/* this could be relaxed once a method supports cross-fs copies */
@@ -1370,8 +1381,13 @@ ssize_t vfs_copy_file_range(struct file *file_in, loff_t pos_in,
 	if (ret)
 		return ret;
 
-	ret = file_out->f_op->copy_file_range(file_in, pos_in, file_out, pos_out,
-					      len, flags);
+	ret = -EOPNOTSUPP;
+	if (file_out->f_op->copy_file_range)
+		ret = file_out->f_op->copy_file_range(file_in, pos_in, file_out,
+						      pos_out, len, flags);
+	if (ret == -EOPNOTSUPP)
+		ret = vfs_copy_fr_copy(file_in, pos_in, file_out, pos_out, len);
+
 	if (ret > 0) {
 		fsnotify_access(file_in);
 		add_rchar(current, ret);
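
For context, the new fallback engages whenever the filesystem's
->copy_file_range() hook is absent or returns -EOPNOTSUPP.  A filesystem that
offers copy acceleration would wire the hook up roughly as in the sketch
below; the examplefs_* names are hypothetical placeholders, not code from
this series.

#include <linux/fs.h>

/* Hypothetical helpers standing in for real offload logic. */
static bool examplefs_can_offload(struct file *file_in, struct file *file_out);
static ssize_t examplefs_do_offload(struct file *file_in, loff_t pos_in,
				    struct file *file_out, loff_t pos_out,
				    size_t len);

/* Hypothetical filesystem glue, sketched against the hook called from
 * vfs_copy_file_range() above.  Returning -EOPNOTSUPP when the offload
 * cannot be used for a given request now makes the VFS fall back to the
 * pagecache copy instead of failing the whole call. */
static ssize_t examplefs_copy_file_range(struct file *file_in, loff_t pos_in,
					 struct file *file_out, loff_t pos_out,
					 size_t len, unsigned int flags)
{
	if (!examplefs_can_offload(file_in, file_out))
		return -EOPNOTSUPP;	/* let the VFS do the generic copy */

	return examplefs_do_offload(file_in, pos_in, file_out, pos_out, len);
}

static const struct file_operations examplefs_file_operations = {
	/* ... open/read_iter/write_iter/etc. ... */
	.copy_file_range	= examplefs_copy_file_range,
};

Because the rw_verify_area() checks now live only in the fallback, an
accelerated method remains free to accept a range larger than MAX_RW_COUNT.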