From patchwork Sat Jan 29 19:26:26 2011
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
X-Patchwork-Id: 516901
X-Patchwork-Delegate: ericvh@gmail.com
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
To: v9fs-developer@lists.sourceforge.net
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Sun, 30 Jan 2011 00:56:26 +0530
Message-Id: <1296329186-23807-8-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
In-Reply-To: <1296329186-23807-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
References: <1296329186-23807-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Subject: [V9fs-developer] [RFC PATCH -V1 7/7] fs/9p: Add buffered write support for v9fs.

We can now support writeable mmaps.
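A quick way to see the new behaviour from userspace is the sketch below.
It is illustrative only: it assumes a 9p mount at /mnt/9p with a cache
mode enabled (e.g. -o cache=loose); the path and file name are made up.
Before this patch the cached file operations used
generic_file_readonly_mmap, so the PROT_WRITE/MAP_SHARED mmap() below
would fail outright; with the patch, the store through the mapping
dirties the page via v9fs_vm_page_mkwrite and msync() pushes it to the
server through the new writepage path.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	char *p;
	int fd = open("/mnt/9p/testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0 || ftruncate(fd, 4096) < 0) {
		perror("open/ftruncate");
		return 1;
	}
	/*
	 * A shared writeable mapping: generic_file_readonly_mmap used
	 * to reject this, v9fs_file_mmap now allows it.
	 */
	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	strcpy(p, "written through the mapping\n"); /* dirties the page */
	if (msync(p, 4096, MS_SYNC) < 0)	/* flushed via writepage */
		perror("msync");
	munmap(p, 4096);
	close(fd);
	return 0;
}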
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 005024c..792c68a 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -39,16 +39,16 @@
 #include "v9fs.h"
 #include "v9fs_vfs.h"
 #include "cache.h"
+#include "fid.h"
 
 /**
- * v9fs_vfs_readpage - read an entire page in from 9P
+ * v9fs_fid_readpage - read an entire page in from 9P
  *
- * @filp: file being read
+ * @fid: fid being read
  * @page: structure to page
  *
  */
-
-static int v9fs_vfs_readpage(struct file *filp, struct page *page)
+static int v9fs_fid_readpage(struct p9_fid *fid, struct page *page)
 {
 	int retval;
 	loff_t offset;
@@ -67,7 +67,7 @@ static int v9fs_vfs_readpage(struct file *filp, struct page *page)
 	buffer = kmap(page);
 	offset = page_offset(page);
 
-	retval = v9fs_file_readn(filp, buffer, NULL, PAGE_CACHE_SIZE, offset);
+	retval = v9fs_fid_readn(fid, buffer, NULL, PAGE_CACHE_SIZE, offset);
 	if (retval < 0) {
 		v9fs_uncache_page(inode, page);
 		goto done;
@@ -87,6 +87,19 @@ done:
 }
 
 /**
+ * v9fs_vfs_readpage - read an entire page in from 9P
+ *
+ * @filp: file being read
+ * @page: structure to page
+ *
+ */
+
+static int v9fs_vfs_readpage(struct file *filp, struct page *page)
+{
+	return v9fs_fid_readpage(filp->private_data, page);
+}
+
+/**
  * v9fs_vfs_readpages - read a set of pages from 9P
  *
  * @filp: file being read
@@ -124,7 +137,6 @@ static int v9fs_release_page(struct page *page, gfp_t gfp)
 {
 	if (PagePrivate(page))
 		return 0;
-
 	return v9fs_fscache_release_page(page, gfp);
 }
 
@@ -137,21 +149,86 @@ static int v9fs_release_page(struct page *page, gfp_t gfp)
 
 static void v9fs_invalidate_page(struct page *page, unsigned long offset)
 {
+	/*
+	 * If called with zero offset, we should release
+	 * the private state associated with the page
+	 */
 	if (offset == 0)
 		v9fs_fscache_invalidate_page(page);
 }
 
+static int v9fs_vfs_writepage_locked(struct page *page)
+{
+	char *buffer;
+	int retval, len;
+	loff_t offset, size;
+	mm_segment_t old_fs;
+	struct inode *inode = page->mapping->host;
+
+	size = i_size_read(inode);
+	if (page->index == size >> PAGE_CACHE_SHIFT)
+		len = size & ~PAGE_CACHE_MASK;
+	else
+		len = PAGE_CACHE_SIZE;
+
+	set_page_writeback(page);
+
+	buffer = kmap(page);
+	offset = page_offset(page);
+
+	old_fs = get_fs();
+	set_fs(get_ds());
+	/* We should have i_private always set */
+	BUG_ON(!inode->i_private);
+
+	retval = v9fs_file_write_internal(inode,
+					  (struct p9_fid *)inode->i_private,
+					  buffer, len, &offset, 0);
+	if (retval > 0)
+		retval = 0;
+
+	set_fs(old_fs);
+	kunmap(page);
+	end_page_writeback(page);
+	return retval;
+}
+
+static int v9fs_vfs_writepage(struct page *page, struct writeback_control *wbc)
+{
+	int retval;
+
+	retval = v9fs_vfs_writepage_locked(page);
+	if (retval < 0) {
+		if (retval == -EAGAIN) {
+			redirty_page_for_writepage(wbc, page);
+			retval = 0;
+		} else {
+			SetPageError(page);
+			mapping_set_error(page->mapping, retval);
+		}
+	} else
+		retval = 0;
+
+	unlock_page(page);
+	return retval;
+}
+
 /**
  * v9fs_launder_page - Writeback a dirty page
- * Since the writes go directly to the server, we simply return a 0
- * here to indicate success.
- *
  * Returns 0 on success.
  */
 static int v9fs_launder_page(struct page *page)
 {
+	int retval;
+	struct inode *inode = page->mapping->host;
+
+	v9fs_fscache_wait_on_page_write(inode, page);
+	if (clear_page_dirty_for_io(page)) {
+		retval = v9fs_vfs_writepage_locked(page);
+		if (retval)
+			return retval;
+	}
 	return 0;
 }
 
@@ -177,6 +254,11 @@ static int v9fs_launder_page(struct page *page)
 ssize_t v9fs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
 		       loff_t pos, unsigned long nr_segs)
 {
+	/*
+	 * FIXME
+	 * Now that we do caching with cache mode enabled, we need
+	 * to support direct IO
+	 */
 	P9_DPRINTK(P9_DEBUG_VFS, "v9fs_direct_IO: v9fs_direct_IO (%s) "
 			"off/no(%lld/%lu) EINVAL\n",
 			iocb->ki_filp->f_path.dentry->d_name.name,
@@ -184,11 +266,81 @@ ssize_t v9fs_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
 	return -EINVAL;
 }
 
+
+static int v9fs_write_begin(struct file *filp, struct address_space *mapping,
+			    loff_t pos, unsigned len, unsigned flags,
+			    struct page **pagep, void **fsdata)
+{
+	int retval = 0;
+	struct page *page;
+	pgoff_t index = pos >> PAGE_CACHE_SHIFT;
+	struct inode *inode = mapping->host;
+
+start:
+	page = grab_cache_page_write_begin(mapping, index, flags);
+	if (!page) {
+		retval = -ENOMEM;
+		goto out;
+	}
+	BUG_ON(!inode->i_private);
+	if (PageUptodate(page))
+		goto out;
+
+	if (len == PAGE_CACHE_SIZE)
+		goto out;
+
+	retval = v9fs_fid_readpage(inode->i_private, page);
+	page_cache_release(page);
+	if (!retval)
+		goto start;
+out:
+	*pagep = page;
+	return retval;
+}
+
+static int v9fs_write_end(struct file *filp, struct address_space *mapping,
+			  loff_t pos, unsigned len, unsigned copied,
+			  struct page *page, void *fsdata)
+{
+	loff_t last_pos = pos + copied;
+	struct inode *inode = page->mapping->host;
+
+	if (unlikely(copied < len)) {
+		/*
+		 * zero out the rest of the area
+		 */
+		unsigned from = pos & (PAGE_CACHE_SIZE - 1);
+
+		zero_user(page, from + copied, len - copied);
+		flush_dcache_page(page);
+	}
+
+	if (!PageUptodate(page))
+		SetPageUptodate(page);
+	/*
+	 * No need to use i_size_read() here, the i_size
+	 * cannot change under us because we hold the i_mutex.
+	 */
+	if (last_pos > inode->i_size)
+		i_size_write(inode, last_pos);
+
+	set_page_dirty(page);
+	unlock_page(page);
+	page_cache_release(page);
+
+	return copied;
+}
+
+
 const struct address_space_operations v9fs_addr_operations = {
-	.readpage = v9fs_vfs_readpage,
-	.readpages = v9fs_vfs_readpages,
-	.releasepage = v9fs_release_page,
-	.invalidatepage = v9fs_invalidate_page,
-	.launder_page = v9fs_launder_page,
-	.direct_IO = v9fs_direct_IO,
+	.readpage = v9fs_vfs_readpage,
+	.readpages = v9fs_vfs_readpages,
+	.set_page_dirty = __set_page_dirty_nobuffers,
+	.writepage = v9fs_vfs_writepage,
+	.write_begin = v9fs_write_begin,
+	.write_end = v9fs_write_end,
+	.releasepage = v9fs_release_page,
+	.invalidatepage = v9fs_invalidate_page,
+	.launder_page = v9fs_launder_page,
+	.direct_IO = v9fs_direct_IO,
 };
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 0f62b26..6e38947 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -44,6 +44,8 @@
 #include "fid.h"
 #include "cache.h"
 
+static const struct vm_operations_struct v9fs_file_vm_ops;
+
 /**
  * v9fs_file_open - open a file (or directory)
  * @inode: inode to be opened
@@ -515,6 +517,7 @@ out:
 	return retval;
 }
 
+
 static int v9fs_file_fsync(struct file *filp, int datasync)
 {
 	struct p9_fid *fid;
@@ -544,28 +547,71 @@ int v9fs_file_fsync_dotl(struct file *filp, int datasync)
 	return retval;
 }
 
+static int
+v9fs_file_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	int retval;
+
+	retval = generic_file_mmap(file, vma);
+	if (!retval)
+		vma->vm_ops = &v9fs_file_vm_ops;
+
+	return retval;
+}
+
+static int
+v9fs_vm_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct page *page = vmf->page;
+	struct file *filp = vma->vm_file;
+	struct inode *inode = filp->f_path.dentry->d_inode;
+
+
+	P9_DPRINTK(P9_DEBUG_VFS, "page %p fid %lx\n",
+		   page, (unsigned long)filp->private_data);
+
+	/* make sure the cache has finished storing the page */
+	v9fs_fscache_wait_on_page_write(inode, page);
+	BUG_ON(!inode->i_private);
+	lock_page(page);
+	if (page->mapping != inode->i_mapping)
+		goto out_unlock;
+
+	return VM_FAULT_LOCKED;
+out_unlock:
+	unlock_page(page);
+	return VM_FAULT_NOPAGE;
+}
+
+static const struct vm_operations_struct v9fs_file_vm_ops = {
+	.fault = filemap_fault,
+	.page_mkwrite = v9fs_vm_page_mkwrite,
+};
+
 const struct file_operations v9fs_cached_file_operations = {
 	.llseek = generic_file_llseek,
 	.read = do_sync_read,
+	.write = do_sync_write,
 	.aio_read = generic_file_aio_read,
-	.write = v9fs_file_write,
+	.aio_write = generic_file_aio_write,
 	.open = v9fs_file_open,
 	.release = v9fs_dir_release,
 	.lock = v9fs_file_lock,
-	.mmap = generic_file_readonly_mmap,
+	.mmap = v9fs_file_mmap,
 	.fsync = v9fs_file_fsync,
 };
 
 const struct file_operations v9fs_cached_file_operations_dotl = {
 	.llseek = generic_file_llseek,
 	.read = do_sync_read,
+	.write = do_sync_write,
 	.aio_read = generic_file_aio_read,
-	.write = v9fs_file_write,
+	.aio_write = generic_file_aio_write,
 	.open = v9fs_file_open,
 	.release = v9fs_dir_release,
 	.lock = v9fs_file_lock_dotl,
 	.flock = v9fs_file_flock_dotl,
-	.mmap = generic_file_readonly_mmap,
+	.mmap = v9fs_file_mmap,
 	.fsync = v9fs_file_fsync_dotl,
 };
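For completeness, ordinary write(2) traffic changes as well: with
do_sync_write/generic_file_aio_write wired in, data is copied into the
page cache through v9fs_write_begin/v9fs_write_end, and the dirty pages
later reach the server through v9fs_vfs_writepage during writeback. A
minimal sketch of that path, again against a hypothetical /mnt/9p mount
with a cache mode enabled:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char msg[] = "buffered write via the page cache\n";
	int fd = open("/mnt/9p/buffered", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* copied into the page cache via v9fs_write_begin/v9fs_write_end */
	if (write(fd, msg, sizeof(msg) - 1) < 0)
		perror("write");
	/* request a flush; dirty pages go out via the writeback path */
	if (fsync(fd) < 0)
		perror("fsync");
	close(fd);
	return 0;
}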