From patchwork Wed May 16 03:52:37 2018
X-Patchwork-Submitter: robbieko
X-Patchwork-Id: 10402489
From: robbieko <robbieko@synology.com>
To: linux-btrfs@vger.kernel.org
Cc: Robbie Ko <robbieko@synology.com>
Subject: [PATCH] Btrfs: implement unlocked buffered write
Date: Wed, 16 May 2018 11:52:37 +0800
Message-Id: <1526442757-7167-1-git-send-email-robbieko@synology.com>

From: Robbie Ko <robbieko@synology.com>

This idea comes from direct I/O. With this patch, buffered writes can run
in parallel, which improves both throughput and latency.

However, since we cannot update isize without holding the i_mutex, the
unlocked buffered write path is only taken for writes that stay within
the EOF.

We need not worry about a race between buffered write and truncate,
because truncate has to wait until all in-flight buffered writes have
finished. And, as with direct I/O writes, we need not worry about a race
with punch hole, because we hold the extent lock to protect our operation.

I ran fio to test the performance of this feature.
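For illustration only (not part of the patch), here is a minimal userspace
sketch of the kind of workload that can take the unlocked path: several
threads issuing plain pwrite() calls that all land strictly inside i_size of
a file whose size is set up front, so no write ever has to update isize. The
path, file size and thread count are made up and only mirror the fio job
below.

/* Hypothetical reproducer: parallel buffered writes that never extend
 * the file, so each write may take the unlocked path.
 * Build: gcc -O2 -pthread -o parwrite parwrite.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NTHREADS	8
#define BLOCKSIZE	4096
#define FILESIZE	(1UL << 30)	/* file size fixed before writing */

static int fd;

static void *writer(void *arg)
{
	unsigned int seed = (unsigned long)arg;
	char buf[BLOCKSIZE];
	long i;

	memset(buf, 'x', sizeof(buf));
	for (i = 0; i < 100000; i++) {
		/* block-aligned offset strictly below EOF */
		off_t off = (off_t)(rand_r(&seed) % (FILESIZE / BLOCKSIZE)) * BLOCKSIZE;

		if (pwrite(fd, buf, sizeof(buf), off) != sizeof(buf)) {
			perror("pwrite");
			break;
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];
	long i;

	fd = open("/mnt/btrfs/nocow/testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0 || ftruncate(fd, FILESIZE) < 0) {
		perror("open/ftruncate");
		return 1;
	}
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, writer, (void *)i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	close(fd);
	return 0;
}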
== Hardware ==
CPU: Intel® Xeon® D-1531
SSD: Intel S3710 200G
Volume: RAID 5, SSD * 6

== config file ==
[global]
group_reporting
time_based
thread=1
norandommap
ioengine=libaio
bs=4k
iodepth=32
size=16G
runtime=180
numjobs=8
rw=randwrite

[file1]
filename=/mnt/btrfs/nocow/testfile

== result (iops) ==
lock     = 68470
unlocked = 94242

== result (clat) ==
lock
 lat (usec): min=184, max=1209.9K, avg=3738.35, stdev=20869.49
 clat percentiles (usec):
 |  1.00th=[   322],  5.00th=[   330], 10.00th=[   334], 20.00th=[   346],
 | 30.00th=[   370], 40.00th=[   386], 50.00th=[   406], 60.00th=[   446],
 | 70.00th=[   516], 80.00th=[   612], 90.00th=[   828], 95.00th=[ 10432],
 | 99.00th=[ 84480], 99.50th=[117248], 99.90th=[226304], 99.95th=[333824],
 | 99.99th=[692224]

unlocked
 lat (usec): min=10, max=218208, avg=2691.44, stdev=5380.82
 clat percentiles (usec):
 |  1.00th=[   302],  5.00th=[   390], 10.00th=[   442], 20.00th=[   502],
 | 30.00th=[   548], 40.00th=[   596], 50.00th=[   652], 60.00th=[   724],
 | 70.00th=[   916], 80.00th=[  5024], 90.00th=[  5664], 95.00th=[ 10048],
 | 99.00th=[ 29568], 99.50th=[ 39168], 99.90th=[ 54016], 99.95th=[ 59648],
 | 99.99th=[ 78336]

Signed-off-by: Robbie Ko <robbieko@synology.com>
---
 fs/btrfs/file.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 41ab907..8eac540 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1600,6 +1600,7 @@ static noinline ssize_t __btrfs_buffered_write(struct file *file,
 	int ret = 0;
 	bool only_release_metadata = false;
 	bool force_page_uptodate = false;
+	bool relock = false;
 
 	nrptrs = min(DIV_ROUND_UP(iov_iter_count(i), PAGE_SIZE),
 			PAGE_SIZE / (sizeof(struct page *)));
@@ -1609,6 +1610,18 @@ static noinline ssize_t __btrfs_buffered_write(struct file *file,
 	if (!pages)
 		return -ENOMEM;
 
+	inode_dio_begin(inode);
+
+	/*
+	 * If the write is beyond the EOF, we need to update
+	 * isize, which is protected by the i_mutex. So we can
+	 * not unlock the i_mutex in that case.
+	 */
+	if (pos + iov_iter_count(i) <= i_size_read(inode)) {
+		inode_unlock(inode);
+		relock = true;
+	}
+
 	while (iov_iter_count(i) > 0) {
 		size_t offset = pos & (PAGE_SIZE - 1);
 		size_t sector_offset;
@@ -1808,6 +1821,10 @@ static noinline ssize_t __btrfs_buffered_write(struct file *file,
 		}
 	}
 
+	inode_dio_end(inode);
+	if (relock)
+		inode_lock(inode);
+
 	extent_changeset_free(data_reserved);
 	return num_written ? num_written : ret;
 }
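A note on the truncate ordering the commit message relies on, with a toy
userspace model (not kernel code; every name below is invented for the
model): the write path brackets itself with inode_dio_begin()/inode_dio_end()
while it may have dropped the i_mutex, and the truncate path, which still
holds the i_mutex, is expected to wait in inode_dio_wait() until that count
drains before changing the file size. The sketch models that handshake with
a mutex, a counter and a condition variable.

/* Toy model of the writer/truncate handshake. Build:
 * gcc -O2 -pthread -o diomodel diomodel.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ i_mutex */
static pthread_mutex_t cnt_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cnt_zero   = PTHREAD_COND_INITIALIZER;
static int dio_count;                   /* ~ inode->i_dio_count */

static void dio_begin(void)             /* ~ inode_dio_begin() */
{
	pthread_mutex_lock(&cnt_lock);
	dio_count++;
	pthread_mutex_unlock(&cnt_lock);
}

static void dio_end(void)               /* ~ inode_dio_end() */
{
	pthread_mutex_lock(&cnt_lock);
	if (--dio_count == 0)
		pthread_cond_broadcast(&cnt_zero);
	pthread_mutex_unlock(&cnt_lock);
}

static void dio_wait(void)              /* ~ inode_dio_wait() */
{
	pthread_mutex_lock(&cnt_lock);
	while (dio_count > 0)
		pthread_cond_wait(&cnt_zero, &cnt_lock);
	pthread_mutex_unlock(&cnt_lock);
}

static void *writer(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode_lock);    /* write path enters with i_mutex  */
	dio_begin();
	pthread_mutex_unlock(&inode_lock);  /* write is within EOF: drop it    */

	usleep(100000);                     /* the actual buffered copy        */
	printf("writer: copied data without holding i_mutex\n");

	dio_end();
	pthread_mutex_lock(&inode_lock);    /* relock before returning         */
	pthread_mutex_unlock(&inode_lock);
	return NULL;
}

static void *truncater(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode_lock);    /* truncate holds i_mutex ...      */
	dio_wait();                         /* ... and waits for the writers   */
	printf("truncate: in-flight writes done, safe to change i_size\n");
	pthread_mutex_unlock(&inode_lock);
	return NULL;
}

int main(void)
{
	pthread_t w, t;

	pthread_create(&w, NULL, writer, NULL);
	usleep(10000);                      /* let the writer start first */
	pthread_create(&t, NULL, truncater, NULL);
	pthread_join(w, NULL);
	pthread_join(t, NULL);
	return 0;
}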