From patchwork Tue Apr 12 18:43:52 2016
From: Jens Axboe <axboe@fb.com>
Subject: [PATCH 2/3] writeback: add wbc_to_write()
Date: Tue, 12 Apr 2016 12:43:52 -0600
Message-ID: <1460486633-26099-3-git-send-email-axboe@fb.com>
In-Reply-To: <1460486633-26099-1-git-send-email-axboe@fb.com>
References: <1460486633-26099-1-git-send-email-axboe@fb.com>
List-ID: linux-block@vger.kernel.org

Add wbc_to_write(), which returns the write type to use based on a
struct writeback_control. It prepares us for factoring other wbc
fields into the write type as well. No intended functional changes
in this patch.
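As an example, this is the transformation applied at each call site.
The policy that used to be open-coded everywhere:

	/* before: every caller duplicates the WB_SYNC_ALL check */
	int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE;

now lives in one helper:

	/* after: the sync policy is centralized in wbc_to_write() */
	int rw = wbc_to_write(wbc);

Callers that need extra request flags, like gfs2, simply OR them in:

	int write_op = REQ_META | REQ_PRIO | wbc_to_write(wbc);
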
Signed-off-by: Jens Axboe <axboe@fb.com>
---
 fs/block_dev.c            | 2 +-
 fs/buffer.c               | 2 +-
 fs/f2fs/data.c            | 2 +-
 fs/f2fs/node.c            | 2 +-
 fs/gfs2/meta_io.c         | 3 +--
 fs/mpage.c                | 9 ++++-----
 fs/xfs/xfs_aops.c         | 2 +-
 include/linux/writeback.h | 8 ++++++++
 8 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 3172c4e2f502..b11d4e08b9a7 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -432,7 +432,7 @@ int bdev_write_page(struct block_device *bdev, sector_t sector,
 			struct page *page, struct writeback_control *wbc)
 {
 	int result;
-	int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE;
+	int rw = wbc_to_write(wbc);
 	const struct block_device_operations *ops = bdev->bd_disk->fops;
 
 	if (!ops->rw_page || bdev_get_integrity(bdev))
diff --git a/fs/buffer.c b/fs/buffer.c
index 33be29675358..28273caaf2b1 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1697,7 +1697,7 @@ static int __block_write_full_page(struct inode *inode, struct page *page,
 	struct buffer_head *bh, *head;
 	unsigned int blocksize, bbits;
 	int nr_underway = 0;
-	int write_op = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
+	int write_op = wbc_to_write(wbc);
 
 	head = create_page_buffers(page, inode,
 					(1 << BH_Dirty)|(1 << BH_Uptodate));
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index e5c762b37239..dca5d43c67a3 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1143,7 +1143,7 @@ static int f2fs_write_data_page(struct page *page,
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
 		.type = DATA,
-		.rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE,
+		.rw = wbc_to_write(wbc),
 		.page = page,
 		.encrypted_page = NULL,
 	};
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 118321bd1a7f..db9201f45bf1 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1397,7 +1397,7 @@ static int f2fs_write_node_page(struct page *page,
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
 		.type = NODE,
-		.rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE,
+		.rw = wbc_to_write(wbc),
 		.page = page,
 		.encrypted_page = NULL,
 	};
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index e137d96f1b17..ede87306caa5 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -37,8 +37,7 @@ static int gfs2_aspace_writepage(struct page *page, struct writeback_control *wb
 {
 	struct buffer_head *bh, *head;
 	int nr_underway = 0;
-	int write_op = REQ_META | REQ_PRIO |
-		(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
+	int write_op = REQ_META | REQ_PRIO | wbc_to_write(wbc);
 
 	BUG_ON(!PageLocked(page));
 	BUG_ON(!page_has_buffers(page));
diff --git a/fs/mpage.c b/fs/mpage.c
index 6bd9fd90964e..9986c752f7bb 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -486,7 +486,6 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc,
 	struct buffer_head map_bh;
 	loff_t i_size = i_size_read(inode);
 	int ret = 0;
-	int wr = (wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE);
 
 	if (page_has_buffers(page)) {
 		struct buffer_head *head = page_buffers(page);
@@ -595,7 +594,7 @@ page_is_mapped:
 	 * This page will go to BIO.  Do we need to send this BIO off first?
 	 */
 	if (bio && mpd->last_block_in_bio != blocks[0] - 1)
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(wbc_to_write(wbc), bio);
 
 alloc_new:
 	if (bio == NULL) {
@@ -622,7 +621,7 @@ alloc_new:
 	wbc_account_io(wbc, page, PAGE_SIZE);
 	length = first_unmapped << blkbits;
 	if (bio_add_page(bio, page, length, 0) < length) {
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(wbc_to_write(wbc), bio);
 		goto alloc_new;
 	}
 
@@ -632,7 +631,7 @@ alloc_new:
 	set_page_writeback(page);
 	unlock_page(page);
 	if (boundary || (first_unmapped != blocks_per_page)) {
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(wbc_to_write(wbc), bio);
 		if (boundary_block) {
 			write_boundary_block(boundary_bdev,
 					boundary_block, 1 << blkbits);
@@ -644,7 +643,7 @@ alloc_new:
 
 confused:
 	if (bio)
-		bio = mpage_bio_submit(wr, bio);
+		bio = mpage_bio_submit(wbc_to_write(wbc), bio);
 
 	if (mpd->use_writepage) {
 		ret = mapping->a_ops->writepage(page, wbc);
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index d445a64b979e..239a612ea1d6 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -393,7 +393,7 @@ xfs_submit_ioend_bio(
 	atomic_inc(&ioend->io_remaining);
 	bio->bi_private = ioend;
 	bio->bi_end_io = xfs_end_bio;
-	submit_bio(wbc->sync_mode == WB_SYNC_ALL ? WRITE_SYNC : WRITE, bio);
+	submit_bio(wbc_to_write(wbc), bio);
 }
 
 STATIC struct bio *
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index d0b5ca5d4e08..719c255e105a 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -100,6 +100,14 @@ struct writeback_control {
 #endif
 };
 
+static inline int wbc_to_write(struct writeback_control *wbc)
+{
+	if (wbc->sync_mode == WB_SYNC_ALL)
+		return WRITE_SYNC;
+
+	return WRITE;
+}
+
 /*
  * A wb_domain represents a domain that wb's (bdi_writeback's) belong to
  * and are measured against each other in.  There always is one global
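
For illustration only (not part of the patch): a minimal sketch of how
a new caller would adopt the helper. The names example_writepage() and
example_submit() are hypothetical:

	#include <linux/fs.h>
	#include <linux/writeback.h>

	/*
	 * Hypothetical caller: pick the write type from the wbc via
	 * wbc_to_write() instead of open-coding the WB_SYNC_ALL check,
	 * then pass it down the submission path.
	 */
	static int example_writepage(struct page *page,
				     struct writeback_control *wbc)
	{
		int rw = wbc_to_write(wbc);	/* WRITE_SYNC or WRITE */

		return example_submit(page, rw);	/* hypothetical */
	}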