From patchwork Fri Jun 4 08:14:48 2010
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 104238
Date: Fri, 4 Jun 2010 17:14:48 +0900
From: Takuya Yoshikawa
To: chris.mason@oracle.com
Cc: linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
    takuya.yoshikawa@gmail.com
Subject: [PATCH] btrfs: use zero_user family instead of writing down the same sequence repeatedly
Message-Id: <20100604171448.4f1e2dbe.yoshikawa.takuya@oss.ntt.co.jp>

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 396039b..12aded9 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -512,17 +512,11 @@ static noinline int add_ra_bio_pages(struct inode *inode,
 		free_extent_map(em);
 
 		if (page->index == end_index) {
-			char *userpage;
 			size_t zero_offset = isize & (PAGE_CACHE_SIZE - 1);
 
-			if (zero_offset) {
-				int zeros;
-				zeros = PAGE_CACHE_SIZE - zero_offset;
-				userpage = kmap_atomic(page, KM_USER0);
-				memset(userpage + zero_offset, 0, zeros);
-				flush_dcache_page(page);
-				kunmap_atomic(userpage, KM_USER0);
-			}
+			if (zero_offset)
+				zero_user_segment(page, zero_offset,
+						  PAGE_CACHE_SIZE);
 		}
 
 		ret = bio_add_page(cb->orig_bio, page,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a4080c2..15dce48 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2039,25 +2039,15 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
 	}
 
 	if (page->index == last_byte >> PAGE_CACHE_SHIFT) {
-		char *userpage;
 		size_t zero_offset = last_byte & (PAGE_CACHE_SIZE - 1);
 
-		if (zero_offset) {
-			iosize = PAGE_CACHE_SIZE - zero_offset;
-			userpage = kmap_atomic(page, KM_USER0);
-			memset(userpage + zero_offset, 0, iosize);
-			flush_dcache_page(page);
-			kunmap_atomic(userpage, KM_USER0);
-		}
+		if (zero_offset)
+			zero_user_segment(page, zero_offset, PAGE_CACHE_SIZE);
 	}
 	while (cur <= end) {
 		if (cur >= last_byte) {
-			char *userpage;
 			iosize = PAGE_CACHE_SIZE - page_offset;
-			userpage = kmap_atomic(page, KM_USER0);
-			memset(userpage + page_offset, 0, iosize);
-			flush_dcache_page(page);
-			kunmap_atomic(userpage, KM_USER0);
+			zero_user(page, page_offset, iosize);
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    GFP_NOFS);
 			unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS);
@@ -2096,11 +2086,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
 
 		/* we've found a hole, just zero and go on */
 		if (block_start == EXTENT_MAP_HOLE) {
-			char *userpage;
-			userpage = kmap_atomic(page, KM_USER0);
-			memset(userpage + page_offset, 0, iosize);
-			flush_dcache_page(page);
-			kunmap_atomic(userpage, KM_USER0);
+			zero_user(page, page_offset, iosize);
 
 			set_extent_uptodate(tree, cur, cur + iosize - 1,
 					    GFP_NOFS);
@@ -2236,15 +2222,9 @@ static int __extent_writepage(struct page *page, struct writeback_control *wbc,
 		return 0;
 	}
 
-	if (page->index == end_index) {
-		char *userpage;
+	if (page->index == end_index)
+		zero_user_segment(page, pg_offset, PAGE_CACHE_SIZE);
 
-		userpage = kmap_atomic(page, KM_USER0);
-		memset(userpage + pg_offset, 0,
-		       PAGE_CACHE_SIZE - pg_offset);
-		kunmap_atomic(userpage, KM_USER0);
-		flush_dcache_page(page);
-	}
 	pg_offset = 0;
 
 	set_page_extent_mapped(page);
@@ -2789,16 +2769,8 @@ int extent_prepare_write(struct extent_io_tree *tree,
 
 		if (!PageUptodate(page) && isnew &&
 		    (block_off_end > to || block_off_start < from)) {
-			void *kaddr;
-
-			kaddr = kmap_atomic(page, KM_USER0);
-			if (block_off_end > to)
-				memset(kaddr + to, 0, block_off_end - to);
-			if (block_off_start < from)
-				memset(kaddr + block_off_start, 0,
-				       from - block_off_start);
-			flush_dcache_page(page);
-			kunmap_atomic(kaddr, KM_USER0);
+			zero_user_segments(page, to, block_off_end,
+					   block_off_start, from);
 		}
 		if ((em->block_start != EXTENT_MAP_HOLE &&
 		     em->block_start != EXTENT_MAP_INLINE) &&
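
For reference, the zero_user helpers used above come from include/linux/highmem.h
and wrap essentially the same kmap_atomic/memset/flush_dcache_page/kunmap_atomic
sequence that the patch removes. A rough sketch of their definitions, paraphrased
from the kernel headers of that era (not a verbatim copy; exact details may
differ slightly):

	/* Sketch of the zero_user family, include/linux/highmem.h. */
	static inline void zero_user_segments(struct page *page,
					      unsigned start1, unsigned end1,
					      unsigned start2, unsigned end2)
	{
		void *kaddr = kmap_atomic(page, KM_USER0);

		BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);

		/* Zero up to two independent byte ranges within the page. */
		if (end1 > start1)
			memset(kaddr + start1, 0, end1 - start1);
		if (end2 > start2)
			memset(kaddr + start2, 0, end2 - start2);

		kunmap_atomic(kaddr, KM_USER0);
		flush_dcache_page(page);
	}

	/* Zero the bytes from 'start' up to 'end' within the page. */
	static inline void zero_user_segment(struct page *page,
					     unsigned start, unsigned end)
	{
		zero_user_segments(page, start, end, 0, 0);
	}

	/* Zero 'size' bytes beginning at 'start' within the page. */
	static inline void zero_user(struct page *page,
				     unsigned start, unsigned size)
	{
		zero_user_segments(page, start, start + size, 0, 0);
	}

Since all three are thin wrappers around the same map/zero/flush sequence, the
conversion should be behaviour-preserving; the open-coded duplicates simply go
away.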