From patchwork Tue Mar 31 11:52:26 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 6129271
From: Ming Lei
To: Alexander Viro, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Ming Lei
Subject: [PATCH] fs: direct-io: increase bio refcount as batch
Date: Tue, 31 Mar 2015 19:52:26 +0800
Message-Id: <1427802746-30432-1-git-send-email-ming.lei@canonical.com>
X-Mailer: git-send-email 1.7.9.5
List-ID: linux-fsdevel@vger.kernel.org

Each bio is always submitted to the block device one by one, so it
isn't necessary to take dio->bio_lock and increase the bio refcount
once per submission. Instead, account submitted bios in the per-request
dio_submit structure, which needs no locking, and commit the whole
count to dio->refcount under a single lock acquisition in dio_cleanup().

Signed-off-by: Ming Lei
---
 fs/direct-io.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/fs/direct-io.c b/fs/direct-io.c
index 6fb00e3..57b8e73 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -79,6 +79,8 @@ struct dio_submit {
 	get_block_t *get_block;		/* block mapping function */
 	dio_submit_t *submit_io;	/* IO submition function */
 
+	long submitted_bio;
+
 	loff_t logical_offset_in_bio;	/* current first logical block in bio */
 	sector_t final_block_in_bio;	/* current final block in bio + 1 */
 	sector_t next_block_for_io;	/* next block to be put under IO,
@@ -121,7 +123,7 @@ struct dio {
 	int is_async;			/* is IO async ? */
 	bool defer_completion;		/* defer AIO completion to workqueue? */
 	int io_error;			/* IO error in completion path */
-	unsigned long refcount;		/* direct_io_worker() and bios */
+	long refcount;			/* direct_io_worker() and bios */
 	struct bio *bio_list;		/* singly linked via bi_private */
 	struct task_struct *waiter;	/* waiting task (NULL if none) */
@@ -383,14 +385,9 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
 static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
 {
 	struct bio *bio = sdio->bio;
-	unsigned long flags;
 
 	bio->bi_private = dio;
 
-	spin_lock_irqsave(&dio->bio_lock, flags);
-	dio->refcount++;
-	spin_unlock_irqrestore(&dio->bio_lock, flags);
-
 	if (dio->is_async && dio->rw == READ)
 		bio_set_pages_dirty(bio);
@@ -403,15 +400,26 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
 	sdio->bio = NULL;
 	sdio->boundary = 0;
 	sdio->logical_offset_in_bio = 0;
+	sdio->submitted_bio++;
 }
 
 /*
  * Release any resources in case of a failure
  */
-static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio)
+static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio,
+		bool commit_refcount)
 {
+	unsigned long flags;
+
 	while (sdio->head < sdio->tail)
 		page_cache_release(dio->pages[sdio->head++]);
+
+	if (!commit_refcount)
+		return;
+
+	spin_lock_irqsave(&dio->bio_lock, flags);
+	dio->refcount += (sdio->submitted_bio + 1);
+	spin_unlock_irqrestore(&dio->bio_lock, flags);
 }
 
 /*
@@ -1215,7 +1223,6 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	dio->i_size = i_size_read(inode);
 
 	spin_lock_init(&dio->bio_lock);
-	dio->refcount = 1;
 
 	sdio.iter = iter;
 	sdio.final_block_in_request =
@@ -1234,7 +1241,7 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	retval = do_direct_IO(dio, &sdio, &map_bh);
 	if (retval)
-		dio_cleanup(dio, &sdio);
+		dio_cleanup(dio, &sdio, false);
 
 	if (retval == -ENOTBLK) {
 		/*
@@ -1267,7 +1274,7 @@ do_blockdev_direct_IO(int rw, struct kiocb *iocb, struct inode *inode,
 	 * It is possible that, we return short IO due to end of file.
 	 * In that case, we need to release all the pages we got hold on.
 	 */
-	dio_cleanup(dio, &sdio);
+	dio_cleanup(dio, &sdio, true);
 
 	/*
 	 * All block lookups have been performed. For READ requests