From patchwork Thu Jun  1 09:32:43 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 9759019
From: Jan Kara
To:
Cc: Hugh Dickins, David Howells, linux-afs@lists.infradead.org,
 Ryusuke Konishi, linux-nilfs@vger.kernel.org, Bob Peterson,
 cluster-devel@redhat.com, Jaegeuk Kim,
 linux-f2fs-devel@lists.sourceforge.net, tytso@mit.edu,
 linux-ext4@vger.kernel.org, Ilya Dryomov, "Yan, Zheng",
 ceph-devel@vger.kernel.org, linux-btrfs@vger.kernel.org, David Sterba,
 "Darrick J. Wong", linux-xfs@vger.kernel.org, Nadia Yvette Chambers,
 Jan Kara
Subject: [PATCH 33/35] mm: Remove nr_entries argument from
 pagevec_lookup_entries{,_range}()
Date: Thu, 1 Jun 2017 11:32:43 +0200
Message-Id: <20170601093245.29238-34-jack@suse.cz>
X-Mailer: git-send-email 2.12.3
In-Reply-To: <20170601093245.29238-1-jack@suse.cz>
References: <20170601093245.29238-1-jack@suse.cz>
List-ID: ceph-devel@vger.kernel.org

All users now pass PAGEVEC_SIZE as the number of entries, so the argument
carries no information. Remove it.

Signed-off-by: Jan Kara
---
 include/linux/pagevec.h | 7 +++----
 mm/shmem.c              | 4 ++--
 mm/swap.c               | 6 ++----
 mm/truncate.c           | 8 ++++----
 4 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 93308689d6a7..f765fc5eca31 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -25,14 +25,13 @@ void __pagevec_lru_add(struct pagevec *pvec);
 unsigned pagevec_lookup_entries_range(struct pagevec *pvec,
 			struct address_space *mapping,
 			pgoff_t *start, pgoff_t end,
-			unsigned nr_entries, pgoff_t *indices);
+			pgoff_t *indices);
 static inline unsigned pagevec_lookup_entries(struct pagevec *pvec,
 			struct address_space *mapping,
-			pgoff_t *start, unsigned nr_entries,
-			pgoff_t *indices)
+			pgoff_t *start, pgoff_t *indices)
 {
 	return pagevec_lookup_entries_range(pvec, mapping, start, (pgoff_t)-1,
-					    nr_entries, indices);
+					    indices);
 }
 void pagevec_remove_exceptionals(struct pagevec *pvec);
 unsigned pagevec_lookup_range(struct pagevec *pvec,
diff --git a/mm/shmem.c b/mm/shmem.c
index e5ea044aae24..dd8144230ecf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -769,7 +769,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 	index = start;
 	while (index < end) {
 		if (!pagevec_lookup_entries_range(&pvec, mapping, &index,
-					end - 1, PAGEVEC_SIZE, indices))
+					end - 1, indices))
 			break;
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
@@ -857,7 +857,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		cond_resched();

 		if (!pagevec_lookup_entries_range(&pvec, mapping, &index,
-					end - 1, PAGEVEC_SIZE, indices)) {
+					end - 1, indices)) {
 			/* If all gone or hole-punch or unfalloc, we're done */
 			if (lookup_start == start || end != -1)
 				break;
diff --git a/mm/swap.c b/mm/swap.c
index 88c7eb4e97db..1640bbb34e59 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -894,7 +894,6 @@ EXPORT_SYMBOL(__pagevec_lru_add);
  * @mapping:	The address_space to search
  * @start:	The starting entry index
  * @end:	The final entry index (inclusive)
- * @nr_entries:	The maximum number of entries
  * @indices:	The cache indices corresponding to the entries in @pvec
  *
  * pagevec_lookup_entries() will search for and return a group of up
@@ -911,10 +910,9 @@ EXPORT_SYMBOL(__pagevec_lru_add);
 unsigned pagevec_lookup_entries_range(struct pagevec *pvec,
 			struct address_space *mapping,
-			pgoff_t *start, pgoff_t end, unsigned nr_pages,
-			pgoff_t *indices)
+			pgoff_t *start, pgoff_t end, pgoff_t *indices)
 {
-	pvec->nr = find_get_entries_range(mapping, start, end, nr_pages,
+	pvec->nr = find_get_entries_range(mapping, start, end, PAGEVEC_SIZE,
 					  pvec->pages, indices);
 	return pagevec_count(pvec);
 }
diff --git a/mm/truncate.c b/mm/truncate.c
index 31d5c5f3da30..d35531d83cb3 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -290,7 +290,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	pagevec_init(&pvec, 0);
 	index = start;
 	while (index < end && pagevec_lookup_entries_range(&pvec, mapping,
-			&index, end - 1, PAGEVEC_SIZE, indices)) {
+			&index, end - 1, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
@@ -354,7 +354,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		cond_resched();

 		if (!pagevec_lookup_entries_range(&pvec, mapping, &index,
-					end - 1, PAGEVEC_SIZE, indices)) {
+					end - 1, indices)) {
 			/* If all gone from start onwards, we're done */
 			if (lookup_start == start)
 				break;
@@ -476,7 +476,7 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 	pagevec_init(&pvec, 0);
 	while (index <= end && pagevec_lookup_entries_range(&pvec, mapping,
-			&index, end, PAGEVEC_SIZE, indices)) {
+			&index, end, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
@@ -601,7 +601,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	pagevec_init(&pvec, 0);
 	index = start;
 	while (index <= end && pagevec_lookup_entries_range(&pvec, mapping,
-			&index, end, PAGEVEC_SIZE, indices)) {
+			&index, end, indices)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
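
[Editor's note] For readers following along outside the kernel tree, the shape
of this cleanup can be sketched in plain C. Everything below (the struct
layout, the capacity value, the lookup logic) is an illustrative stand-in for
the real pagevec API, not kernel code: a capacity argument that every caller
passed as the same constant is dropped from the signature and the constant is
folded into the implementation, which owns the fixed-size buffer anyway.

```c
#include <assert.h>

/* Illustrative stand-ins only -- not the real kernel definitions. */
#define PAGEVEC_SIZE 14

struct pagevec {
	unsigned nr;
	int pages[PAGEVEC_SIZE];
};

/*
 * After the cleanup there is no nr_entries parameter: the capacity is
 * the pagevec's own fixed size, so the constant lives here, exactly
 * once.  (Before, every caller wrote lookup_range(..., PAGEVEC_SIZE, ...).)
 */
static unsigned lookup_range(struct pagevec *pvec, unsigned *start,
			     unsigned end)
{
	unsigned nr = 0;

	/* Pretend every index in [*start, end] has an entry; stop at
	 * the pagevec capacity and advance *start past what we took. */
	while (*start <= end && nr < PAGEVEC_SIZE)
		pvec->pages[nr++] = (int)(*start)++;
	pvec->nr = nr;
	return nr;
}

/* The open-ended variant forwards the maximum possible index, mirroring
 * how pagevec_lookup_entries() forwards (pgoff_t)-1 to _range(). */
static unsigned lookup(struct pagevec *pvec, unsigned *start)
{
	return lookup_range(pvec, start, (unsigned)-1);
}
```

The call sites then shrink by one argument with no behavior change, since
PAGEVEC_SIZE was the only value ever passed; the hunks above in mm/shmem.c
and mm/truncate.c are exactly this mechanical substitution.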