From patchwork Wed Sep 14 08:37:13 2016
X-Patchwork-Submitter: Miklos Szeredi
X-Patchwork-Id: 9330903
From: Miklos Szeredi
To: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Al Viro
Subject: [PATCH 08/11] filemap: add get_page_for_read() helper
Date: Wed, 14 Sep 2016 10:37:13 +0200
Message-Id: <1473842236-28655-9-git-send-email-mszeredi@redhat.com>
In-Reply-To: <1473842236-28655-1-git-send-email-mszeredi@redhat.com>
References: <1473842236-28655-1-git-send-email-mszeredi@redhat.com>
List-ID: <linux-fsdevel.vger.kernel.org>

Getting the page for reading is pretty complicated.  This functionality
is also duplicated between generic_file_read() and
generic_file_splice_read().

So first extract the common code from do_generic_file_read() into a
separate helper function that takes a filp, the page index, the offset
into the page and the byte count, and returns the page ready for reading
(or an error).

This makes do_generic_file_read() much easier to read as well.
Signed-off-by: Miklos Szeredi
---
 include/linux/pagemap.h |   3 +
 mm/filemap.c            | 339 +++++++++++++++++++++++++-----------------------
 2 files changed, 177 insertions(+), 165 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 66a1260b33de..f0369ded606c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -653,4 +653,7 @@ static inline unsigned long dir_pages(struct inode *inode)
 			       PAGE_SHIFT;
 }
 
+int get_page_for_read(struct file *filp, unsigned long offset, size_t count,
+		      pgoff_t index, struct page **pagep);
+
 #endif /* _LINUX_PAGEMAP_H */
diff --git a/mm/filemap.c b/mm/filemap.c
index 8a287dfc5372..288e0ee803b8 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1648,6 +1648,170 @@ static void shrink_readahead_size_eio(struct file *filp,
 	ra->ra_pages /= 4;
 }
 
+int get_page_for_read(struct file *filp, unsigned long offset, size_t count,
+		      pgoff_t index, struct page **pagep)
+{
+	struct address_space *mapping = filp->f_mapping;
+	struct inode *inode = mapping->host;
+	struct file_ra_state *ra = &filp->f_ra;
+	int error = 0;
+	struct page *page;
+	pgoff_t end_index;
+	loff_t isize;
+	unsigned long nr;
+	pgoff_t pg_count = (offset + count + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+find_page:
+	page = find_get_page(mapping, index);
+	if (!page) {
+		page_cache_sync_readahead(mapping, ra, filp, index, pg_count);
+		page = find_get_page(mapping, index);
+		if (unlikely(page == NULL))
+			goto no_cached_page;
+	}
+	if (PageReadahead(page)) {
+		page_cache_async_readahead(mapping, ra, filp, page,
+					   index, pg_count);
+	}
+	if (!PageUptodate(page)) {
+		/*
+		 * See comment in do_read_cache_page on why
+		 * wait_on_page_locked is used to avoid unnecessarily
+		 * serialisations and why it's safe.
+		 */
+		wait_on_page_locked_killable(page);
+		if (PageUptodate(page))
+			goto page_ok;
+
+		if (inode->i_blkbits == PAGE_SHIFT ||
+		    !mapping->a_ops->is_partially_uptodate)
+			goto page_not_up_to_date;
+		if (!trylock_page(page))
+			goto page_not_up_to_date;
+		/* Did it get truncated before we got the lock? */
+		if (!page->mapping)
+			goto page_not_up_to_date_locked;
+		if (!mapping->a_ops->is_partially_uptodate(page, offset, count))
+			goto page_not_up_to_date_locked;
+		unlock_page(page);
+	}
+page_ok:
+	/*
+	 * i_size must be checked after we know the page is Uptodate.
+	 *
+	 * Checking i_size after the check allows us to calculate
+	 * the correct value for "nr", which means the zero-filled
+	 * part of the page is not copied back to userspace (unless
+	 * another truncate extends the file - this is desired though).
+	 */
+
+	isize = i_size_read(inode);
+	end_index = (isize - 1) >> PAGE_SHIFT;
+	if (unlikely(!isize || index > end_index))
+		goto out_put_page;
+
+	/* nr is the maximum number of bytes to copy from this page */
+	nr = PAGE_SIZE;
+	if (index == end_index) {
+		nr = ((isize - 1) & ~PAGE_MASK) + 1;
+		if (nr <= offset)
+			goto out_put_page;
+	}
+	nr = nr - offset;
+
+	*pagep = page;
+	return nr;
+
+page_not_up_to_date:
+	/* Get exclusive access to the page ... */
+	error = lock_page_killable(page);
+	if (unlikely(error))
+		goto out_put_page;
+
+page_not_up_to_date_locked:
+	/* Did it get truncated before we got the lock? */
+	if (!page->mapping) {
+		unlock_page(page);
+		put_page(page);
+		cond_resched();
+		goto find_page;
+	}
+
+	/* Did somebody else fill it already? */
+	if (PageUptodate(page)) {
+		unlock_page(page);
+		goto page_ok;
+	}
+
+readpage:
+	/*
+	 * A previous I/O error may have been due to temporary
+	 * failures, eg. multipath errors.
+	 * PG_error will be set again if readpage fails.
+	 */
+	ClearPageError(page);
+	/* Start the actual read. The read will unlock the page.
+	 */
+	error = mapping->a_ops->readpage(filp, page);
+
+	if (unlikely(error)) {
+		if (error == AOP_TRUNCATED_PAGE) {
+			put_page(page);
+			error = 0;
+			goto find_page;
+		}
+		goto out_put_page;
+	}
+
+	if (!PageUptodate(page)) {
+		error = lock_page_killable(page);
+		if (unlikely(error))
+			goto out_put_page;
+		if (!PageUptodate(page)) {
+			if (page->mapping == NULL) {
+				/*
+				 * invalidate_mapping_pages got it
+				 */
+				unlock_page(page);
+				put_page(page);
+				goto find_page;
+			}
+			unlock_page(page);
+			shrink_readahead_size_eio(filp, ra);
+			error = -EIO;
+			goto out_put_page;
+		}
+		unlock_page(page);
+	}
+	goto page_ok;
+
+no_cached_page:
+	/*
+	 * Ok, it wasn't cached, so we need to create a new
+	 * page..
+	 */
+	page = page_cache_alloc_cold(mapping);
+	if (!page) {
+		error = -ENOMEM;
+		goto out;
+	}
+	error = add_to_page_cache_lru(page, mapping, index,
+				      mapping_gfp_constraint(mapping, GFP_KERNEL));
+	if (!error)
+		goto readpage;
+
+	if (error == -EEXIST) {
+		put_page(page);
+		error = 0;
+		goto find_page;
+	}
+
+out_put_page:
+	put_page(page);
+out:
+	return error;
+
+}
+
 /**
  * do_generic_file_read - generic file read routine
  * @filp:	the file to read
@@ -1664,11 +1828,8 @@ static void shrink_readahead_size_eio(struct file *filp,
 static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
 		struct iov_iter *iter, ssize_t written)
 {
-	struct address_space *mapping = filp->f_mapping;
-	struct inode *inode = mapping->host;
 	struct file_ra_state *ra = &filp->f_ra;
 	pgoff_t index;
-	pgoff_t last_index;
 	pgoff_t prev_index;
 	unsigned long offset;      /* offset into pagecache page */
 	unsigned int prev_offset;
@@ -1677,87 +1838,26 @@ static ssize_t do_generic_file_read(struct file *filp, loff_t *ppos,
 	index = *ppos >> PAGE_SHIFT;
 	prev_index = ra->prev_pos >> PAGE_SHIFT;
 	prev_offset = ra->prev_pos & (PAGE_SIZE-1);
-	last_index = (*ppos + iter->count + PAGE_SIZE-1) >> PAGE_SHIFT;
 	offset = *ppos & ~PAGE_MASK;
 
 	for (;;) {
 		struct page *page;
-		pgoff_t end_index;
-		loff_t isize;
-		unsigned long nr, ret;
+		long nr, ret;
 
 		cond_resched();
-find_page:
-		page = find_get_page(mapping, index);
-		if (!page) {
-			page_cache_sync_readahead(mapping,
-					ra, filp,
-					index, last_index - index);
-			page = find_get_page(mapping, index);
-			if (unlikely(page == NULL))
-				goto no_cached_page;
-		}
-		if (PageReadahead(page)) {
-			page_cache_async_readahead(mapping,
-					ra, filp, page,
-					index, last_index - index);
-		}
-		if (!PageUptodate(page)) {
-			/*
-			 * See comment in do_read_cache_page on why
-			 * wait_on_page_locked is used to avoid unnecessarily
-			 * serialisations and why it's safe.
-			 */
-			wait_on_page_locked_killable(page);
-			if (PageUptodate(page))
-				goto page_ok;
-
-			if (inode->i_blkbits == PAGE_SHIFT ||
-					!mapping->a_ops->is_partially_uptodate)
-				goto page_not_up_to_date;
-			if (!trylock_page(page))
-				goto page_not_up_to_date;
-			/* Did it get truncated before we got the lock? */
-			if (!page->mapping)
-				goto page_not_up_to_date_locked;
-			if (!mapping->a_ops->is_partially_uptodate(page,
-							offset, iter->count))
-				goto page_not_up_to_date_locked;
-			unlock_page(page);
-		}
-page_ok:
-		/*
-		 * i_size must be checked after we know the page is Uptodate.
-		 *
-		 * Checking i_size after the check allows us to calculate
-		 * the correct value for "nr", which means the zero-filled
-		 * part of the page is not copied back to userspace (unless
-		 * another truncate extends the file - this is desired though).
-		 */
-
-		isize = i_size_read(inode);
-		end_index = (isize - 1) >> PAGE_SHIFT;
-		if (unlikely(!isize || index > end_index)) {
-			put_page(page);
-			goto out;
-		}
-
-		/* nr is the maximum number of bytes to copy from this page */
-		nr = PAGE_SIZE;
-		if (index == end_index) {
-			nr = ((isize - 1) & ~PAGE_MASK) + 1;
-			if (nr <= offset) {
-				put_page(page);
-				goto out;
-			}
+		ret = get_page_for_read(filp, offset, iter->count, index,
+					&page);
+		if (ret <= 0) {
+			error = ret;
+			break;
 		}
-		nr = nr - offset;
+		nr = ret;
 
 		/* If users can be writing to this page using arbitrary
 		 * virtual addresses, take care about potential aliasing
 		 * before reading the page on the kernel side.
 		 */
-		if (mapping_writably_mapped(mapping))
+		if (mapping_writably_mapped(filp->f_mapping))
 			flush_dcache_page(page);
 
 		/*
@@ -1782,104 +1882,13 @@ page_ok:
 		put_page(page);
 		written += ret;
 		if (!iov_iter_count(iter))
-			goto out;
+			break;
 		if (ret < nr) {
 			error = -EFAULT;
-			goto out;
-		}
-		continue;
-
-page_not_up_to_date:
-		/* Get exclusive access to the page ... */
-		error = lock_page_killable(page);
-		if (unlikely(error))
-			goto readpage_error;
-
-page_not_up_to_date_locked:
-		/* Did it get truncated before we got the lock? */
-		if (!page->mapping) {
-			unlock_page(page);
-			put_page(page);
-			continue;
-		}
-
-		/* Did somebody else fill it already? */
-		if (PageUptodate(page)) {
-			unlock_page(page);
-			goto page_ok;
-		}
-
-readpage:
-		/*
-		 * A previous I/O error may have been due to temporary
-		 * failures, eg. multipath errors.
-		 * PG_error will be set again if readpage fails.
-		 */
-		ClearPageError(page);
-		/* Start the actual read. The read will unlock the page.
-		 */
-		error = mapping->a_ops->readpage(filp, page);
-
-		if (unlikely(error)) {
-			if (error == AOP_TRUNCATED_PAGE) {
-				put_page(page);
-				error = 0;
-				goto find_page;
-			}
-			goto readpage_error;
-		}
-
-		if (!PageUptodate(page)) {
-			error = lock_page_killable(page);
-			if (unlikely(error))
-				goto readpage_error;
-			if (!PageUptodate(page)) {
-				if (page->mapping == NULL) {
-					/*
-					 * invalidate_mapping_pages got it
-					 */
-					unlock_page(page);
-					put_page(page);
-					goto find_page;
-				}
-				unlock_page(page);
-				shrink_readahead_size_eio(filp, ra);
-				error = -EIO;
-				goto readpage_error;
-			}
-			unlock_page(page);
-		}
-
-		goto page_ok;
-
-readpage_error:
-		/* UHHUH! A synchronous read error occurred. Report it */
-		put_page(page);
-		goto out;
-
-no_cached_page:
-		/*
-		 * Ok, it wasn't cached, so we need to create a new
-		 * page..
-		 */
-		page = page_cache_alloc_cold(mapping);
-		if (!page) {
-			error = -ENOMEM;
-			goto out;
-		}
-		error = add_to_page_cache_lru(page, mapping, index,
-				mapping_gfp_constraint(mapping, GFP_KERNEL));
-		if (error) {
-			put_page(page);
-			if (error == -EEXIST) {
-				error = 0;
-				goto find_page;
-			}
-			goto out;
+			break;
 		}
-		goto readpage;
 	}
 
-out:
 	ra->prev_pos = prev_index;
 	ra->prev_pos <<= PAGE_SHIFT;
 	ra->prev_pos |= prev_offset;