From patchwork Sat Apr 23 19:13:41 2016
X-Patchwork-Submitter: "Verma, Vishal L"
X-Patchwork-Id: 8918691
From: Vishal Verma
To: linux-nvdimm@lists.01.org
Cc: Vishal Verma, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
    xfs@oss.sgi.com, linux-ext4@vger.kernel.org, linux-mm@kvack.org,
    Matthew Wilcox, Ross Zwisler, Dan Williams, Dave Chinner, Jan Kara,
    Jens Axboe, Al Viro, Andrew Morton, linux-kernel@vger.kernel.org,
    Christoph Hellwig, Jeff Moyer, "Kirill A. Shutemov"
Subject: [PATCH v3 6/7] dax: for truncate/hole-punch, do zeroing through the driver if possible
Date: Sat, 23 Apr 2016 13:13:41 -0600
Message-Id: <1461438822-3592-7-git-send-email-vishal.l.verma@intel.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1461438822-3592-1-git-send-email-vishal.l.verma@intel.com>
References: <1461438822-3592-1-git-send-email-vishal.l.verma@intel.com>

In the truncate or hole-punch path in dax, we clear out sub-page ranges.
If these sub-page ranges are sector-aligned and sector-sized, we can do
the zeroing through the driver instead, so that error clearing is handled
automatically. For sub-sector ranges, we still have to rely on clear_pmem
and accept the possibility of tripping over errors.
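To make the alignment rule concrete, here is a minimal user-space sketch
(illustration only, not kernel code; the helper name range_is_aligned and
the sample numbers are invented for this example). A range can take the
driver path only when both its byte offset and its byte length are
multiples of the device's logical block size:

	#include <stdio.h>
	#include <stdbool.h>

	/* Stand-alone model of the alignment test: both the start and
	 * the length of the range must fall on logical-block boundaries
	 * for the zeroing to be pushed down to the driver. */
	static bool range_is_aligned(unsigned int offset, unsigned int length,
				     unsigned short sector_size)
	{
		if (offset % sector_size)
			return false;
		if (length % sector_size)
			return false;
		return true;
	}

	int main(void)
	{
		unsigned short sector_size = 512;	/* typical logical block */

		/* aligned: zeroing can go through the driver */
		printf("%d\n", range_is_aligned(1024, 2048, sector_size)); /* 1 */
		/* sub-sector: must fall back to clear_pmem */
		printf("%d\n", range_is_aligned(1024, 100, sector_size));  /* 0 */
		return 0;
	}

With a 512-byte logical block size, a 2048-byte punch starting at byte
1024 of the page takes the driver path, while a 100-byte clear does not.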
Cc: Matthew Wilcox
Cc: Dan Williams
Cc: Ross Zwisler
Cc: Jeff Moyer
Cc: Christoph Hellwig
Cc: Dave Chinner
Cc: Jan Kara
Signed-off-by: Vishal Verma
---
 fs/dax.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5948d9b..d8c974e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1196,6 +1196,20 @@ out:
 }
 EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
 
+static bool dax_range_is_aligned(struct block_device *bdev,
+				 struct blk_dax_ctl *dax, unsigned int offset,
+				 unsigned int length)
+{
+	unsigned short sector_size = bdev_logical_block_size(bdev);
+
+	if (((u64)dax->addr + offset) % sector_size)
+		return false;
+	if (length % sector_size)
+		return false;
+
+	return true;
+}
+
 /**
  * dax_zero_page_range - zero a range within a page of a DAX file
  * @inode: The file being truncated
@@ -1240,11 +1254,17 @@ int dax_zero_page_range(struct inode *inode, loff_t from, unsigned length,
 			.size = PAGE_SIZE,
 		};
 
-		if (dax_map_atomic(bdev, &dax) < 0)
-			return PTR_ERR(dax.addr);
-		clear_pmem(dax.addr + offset, length);
-		wmb_pmem();
-		dax_unmap_atomic(bdev, &dax);
+		if (dax_range_is_aligned(bdev, &dax, offset, length))
+			return blkdev_issue_zeroout(bdev, dax.sector,
+					length / bdev_logical_block_size(bdev),
+					GFP_NOFS, true);
+		else {
+			if (dax_map_atomic(bdev, &dax) < 0)
+				return PTR_ERR(dax.addr);
+			clear_pmem(dax.addr + offset, length);
+			wmb_pmem();
+			dax_unmap_atomic(bdev, &dax);
+		}
 	}
 
 	return 0;
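For the aligned branch above, the count handed to blkdev_issue_zeroout is
the patch's byte length divided by the logical block size. A stand-alone
sketch of that arithmetic (values are illustrative only, not taken from
the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned int length = 4096;	 /* bytes to zero, sector-sized */
		unsigned short block_size = 512; /* bdev_logical_block_size() result */

		/* count passed as the nr_sects argument in the aligned branch */
		printf("nr_sects = %u\n", length / block_size); /* prints 8 */
		return 0;
	}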