From patchwork Thu Sep 6 16:06:17 2018
X-Patchwork-Submitter: Luis Henriques
X-Patchwork-Id: 10590833
From: Luis Henriques
To: "Yan, Zheng", Sage Weil, Ilya Dryomov, Gregory Farnum
Cc: ceph-devel@vger.kernel.org, Luis Henriques
Subject: [PATCH v3 0/3] copy_file_range in cephfs kernel client
Date: Thu, 6 Sep 2018 17:06:17 +0100
Message-Id: <20180906160620.16277-1-lhenriques@suse.com>

Changes since v2:

- File size checks are now done after we have all the required caps

Here are the main changes since v1, after Zheng's review:

1. ceph_osdc_copy_from() now receives source and destination snapids
   instead of ceph_vino structs.

2. ceph_copy_file_range() now also gets FILE_RD capabilities for the
   source file, as other clients may have dirty data in their cache.

3. Fall back to the VFS copy_file_range default implementation if
   we're copying beyond the source file's EOF.

Note that 2. required an extra patch modifying ceph_try_get_caps() so
that it can perform a non-blocking attempt at getting CEPH_CAP_FILE_RD
capabilities.
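To make 2. and 3. above more concrete, here's a rough sketch of the
resulting shape of ceph_copy_file_range() -- illustrative only, with
error handling trimmed and the destination caps and the actual copy
loop elided; the ceph_try_get_caps() signature is assumed to be the
non-blocking one patch 1 introduces:

static ssize_t ceph_copy_file_range(struct file *src_file, loff_t src_off,
				    struct file *dst_file, loff_t dst_off,
				    size_t len, unsigned int flags)
{
	struct inode *src_inode = file_inode(src_file);
	int got = 0;

	/*
	 * 3.: 'copy-from' can't read past the source EOF, so return
	 * -EOPNOTSUPP and let vfs_copy_file_range() fall back to its
	 * generic do_splice_direct() implementation.
	 */
	if (src_off + len > i_size_read(src_inode))
		return -EOPNOTSUPP;

	/*
	 * 2.: take CEPH_CAP_FILE_RD on the source so other clients
	 * flush any dirty data first; 'true' is the new non-blocking
	 * flag, and on failure we simply punt back to the VFS.
	 */
	if (ceph_try_get_caps(ceph_inode(src_inode), CEPH_CAP_FILE_RD,
			      0, true, &got) <= 0)
		return -EOPNOTSUPP;

	/* FILE_WR caps on the destination and the copy loop go here. */

	ceph_put_cap_refs(ceph_inode(src_inode), got);
	return -EOPNOTSUPP; /* sketch only -- see patch 3 for the real code */
}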
And here's the original (v1) RFC cover letter, just for reference:

This series is my initial attempt at getting a copy_file_range syscall
implementation in the kernel cephfs client using the 'copy-from' RADOS
operation.  The idea of getting this implemented came from Greg -- or,
at least, he created a feature in the tracker [1].  I just decided to
give it a try as the feature wasn't assigned to anyone ;-)

This patchset has been sitting on my laptop for a while already,
waiting for me to revisit it and review some of its TODOs, but I
finally decided to send it out as-is to get some early feedback.

The first patch implements the copy-from operation in the libceph
module.  Unfortunately, the documentation for this operation is
nonexistent and I had to do a lot of digging to figure out the details
(and I probably missed something!).  For example, initially I was
hoping that this operation could be used to copy more than one object
at a time.  Doing an OSD request per object copy is not ideal, but
unfortunately it seems to be the only way.  Anyway, my expectation is
that this new operation will be useful for other features in the
future.

The second patch is where copy_file_range is implemented.  It could
probably be optimised, but I didn't bother with that for now.  The
important bit is that we may still need to do some manual copies if
the offsets aren't object aligned or if the length is smaller than the
object size (see the sketch after the diffstat below).  I'm using
do_splice_direct() for the manual copies as it was the easiest way to
get a PoC running, but maybe there are better ways.

I've done some functional testing on this PoC, and it also passes the
generic xfstests suite, in particular the copy_file_range-specific
tests (430-434).  But I haven't done any benchmarks to measure any
performance changes from using this syscall.

Any feedback is welcome, especially regarding the TODOs in the code.

[1] https://tracker.ceph.com/issues/21944

Luis Henriques (3):
  ceph: add non-blocking parameter to ceph_try_get_caps()
  ceph: support the RADOS copy-from operation
  ceph: support copy_file_range file operation

 fs/ceph/addr.c                  |   2 +-
 fs/ceph/caps.c                  |   7 +-
 fs/ceph/file.c                  | 221 ++++++++++++++++++++++++++++++++
 fs/ceph/super.h                 |   2 +-
 include/linux/ceph/osd_client.h |  17 +++
 include/linux/ceph/rados.h      |  19 +++
 net/ceph/osd_client.c           |  72 +++++++++++
 7 files changed, 335 insertions(+), 5 deletions(-)
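And for the alignment handling mentioned above, a very rough sketch of
the head/middle/tail split -- a hypothetical helper, not the actual
patch: it assumes source and destination share the same misalignment
and hardcodes a 4M object size, whereas the real code would get the
object size from the file layout:

/* Illustrative only; see the caveats above. */
#define OBJ_SIZE (4 * 1024 * 1024)

static ssize_t copy_range_sketch(struct file *src, loff_t *src_off,
				 struct file *dst, loff_t *dst_off,
				 size_t len)
{
	ssize_t copied = 0;
	ssize_t ret;

	/* Unaligned head: manual copy up to the next object boundary. */
	if (*src_off % OBJ_SIZE) {
		size_t head = min_t(size_t, len,
				    OBJ_SIZE - (*src_off % OBJ_SIZE));

		ret = do_splice_direct(src, src_off, dst, dst_off, head, 0);
		if (ret < (ssize_t)head)
			return ret; /* error or short copy: stop early */
		copied += head;
		len -= head;
	}

	/* Object-aligned middle: one 'copy-from' OSD request per object. */
	while (len >= OBJ_SIZE) {
		/* ceph_osdc_copy_from() (patch 2) would be issued here. */
		*src_off += OBJ_SIZE;
		*dst_off += OBJ_SIZE;
		copied += OBJ_SIZE;
		len -= OBJ_SIZE;
	}

	/* Unaligned tail: manual copy again. */
	if (len) {
		ret = do_splice_direct(src, src_off, dst, dst_off, len, 0);
		if (ret < 0)
			return copied ? copied : ret;
		copied += ret;
	}

	return copied;
}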