From patchwork Wed Jan 27 08:03:10 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049237
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 1/8] NFS: Clean up nfs_readpage() and nfs_readpages()
Date: Wed, 27 Jan 2021 03:03:10 -0500
Message-Id: <1611734597-14754-2-git-send-email-dwysocha@redhat.com>

In prep for the new fscache netfs API, refactor nfs_readpage() and
nfs_readpages() for future patches. No functional change.
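As context for the cleanup above: the rewritten nfs_readpage() follows the usual kernel convention of assigning the error code immediately before the check it covers and funnelling all exits through a label. A minimal userspace sketch of that pattern (open_context() and stale() are invented stand-ins, not NFS code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct ctx { int unused; };

/* Invented stand-ins for nfs_find_open_context() and NFS_STALE(). */
static struct ctx *open_context(void) { return malloc(sizeof(struct ctx)); }
static int stale(void) { return 0; }

static int do_readpage(void)
{
	struct ctx *c;
	int ret;

	ret = -ESTALE;		/* set right before the check it covers */
	if (stale())
		goto out;

	ret = -EBADF;
	c = open_context();
	if (c == NULL)
		goto out;

	ret = 0;		/* the actual read would go here */
	free(c);
out:
	return ret;
}

int main(void)
{
	printf("do_readpage() = %d\n", do_readpage());
	return 0;
}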
Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 45 +++++++++++++++++++++++---------------------- 1 file changed, 23 insertions(+), 22 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index eb854f1f86e2..dd92156e27c5 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -314,7 +314,7 @@ int nfs_readpage(struct file *file, struct page *page) { struct nfs_open_context *ctx; struct inode *inode = page_file_mapping(page)->host; - int error; + int ret; dprintk("NFS: nfs_readpage (%p %ld@%lu)\n", page, PAGE_SIZE, page_index(page)); @@ -328,18 +328,18 @@ int nfs_readpage(struct file *file, struct page *page) * be any new pending writes generated at this point * for this page (other pages can be written to). */ - error = nfs_wb_page(inode, page); - if (error) + ret = nfs_wb_page(inode, page); + if (ret) goto out_unlock; if (PageUptodate(page)) goto out_unlock; - error = -ESTALE; + ret = -ESTALE; if (NFS_STALE(inode)) goto out_unlock; if (file == NULL) { - error = -EBADF; + ret = -EBADF; ctx = nfs_find_open_context(inode, NULL, FMODE_READ); if (ctx == NULL) goto out_unlock; @@ -347,24 +347,24 @@ int nfs_readpage(struct file *file, struct page *page) ctx = get_nfs_open_context(nfs_file_open_context(file)); if (!IS_SYNC(inode)) { - error = nfs_readpage_from_fscache(ctx, inode, page); - if (error == 0) + ret = nfs_readpage_from_fscache(ctx, inode, page); + if (ret == 0) goto out; } xchg(&ctx->error, 0); - error = nfs_readpage_async(ctx, inode, page); - if (!error) { - error = wait_on_page_locked_killable(page); - if (!PageUptodate(page) && !error) - error = xchg(&ctx->error, 0); + ret = nfs_readpage_async(ctx, inode, page); + if (!ret) { + ret = wait_on_page_locked_killable(page); + if (!PageUptodate(page) && !ret) + ret = xchg(&ctx->error, 0); } out: put_nfs_open_context(ctx); - return error; + return ret; out_unlock: unlock_page(page); - return error; + return ret; } struct nfs_readdesc { @@ -404,17 +404,15 @@ struct nfs_readdesc { return error; } -int nfs_readpages(struct file *filp, struct address_space *mapping, +int nfs_readpages(struct file *file, struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; - struct nfs_readdesc desc = { - .pgio = &pgio, - }; + struct nfs_readdesc desc; struct inode *inode = mapping->host; unsigned long npages; - int ret = -ESTALE; + int ret; dprintk("NFS: nfs_readpages (%s/%Lu %d)\n", inode->i_sb->s_id, @@ -422,15 +420,17 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, nr_pages); nfs_inc_stats(inode, NFSIOS_VFSREADPAGES); + ret = -ESTALE; if (NFS_STALE(inode)) goto out; - if (filp == NULL) { + if (file == NULL) { + ret = -EBADF; desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); if (desc.ctx == NULL) - return -EBADF; + goto out; } else - desc.ctx = get_nfs_open_context(nfs_file_open_context(filp)); + desc.ctx = get_nfs_open_context(nfs_file_open_context(file)); /* attempt to read as many of the pages as possible from the cache * - this returns -ENOBUFS immediately if the cookie is negative @@ -440,6 +440,7 @@ int nfs_readpages(struct file *filp, struct address_space *mapping, if (ret == 0) goto read_complete; /* all pages were read */ + desc.pgio = &pgio; nfs_pageio_init_read(&pgio, inode, false, &nfs_async_read_completion_ops); From patchwork Wed Jan 27 08:03:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Wysochanski X-Patchwork-Id: 12049239 Return-Path: 
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 2/8] NFS: In nfs_readpage() only increment NFSIOS_READPAGES when read succeeds
Date: Wed, 27 Jan 2021 03:03:11 -0500
Message-Id: <1611734597-14754-3-git-send-email-dwysocha@redhat.com>

There is a small inconsistency between nfs_readpage() and nfs_readpages()
with regard to NFSIOS_READPAGES. In readpage we unconditionally increment
NFSIOS_READPAGES at the top, even when the read fails. In readpages we
increment NFSIOS_READPAGES at the bottom, based on how many pages were
successfully read. Change readpage to be consistent with readpages, so that
NFSIOS_READPAGES only reflects successful, non-fscache reads.
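The behavioral difference is only where the counter is bumped relative to the I/O outcome. A tiny standalone illustration of counting only successful reads (read_one() and readpages_stat are made-up stand-ins for the NFS read path and NFSIOS_READPAGES):

#include <stdio.h>

static unsigned long readpages_stat;	/* stand-in for NFSIOS_READPAGES */

/* Invented page read: returns 0 on success, negative on failure. */
static int read_one(int fail)
{
	return fail ? -5 : 0;
}

static int readpage(int fail)
{
	int ret = read_one(fail);

	if (ret == 0)
		readpages_stat++;	/* bump only after a successful read */
	return ret;
}

int main(void)
{
	readpage(0);
	readpage(1);
	printf("successful reads counted: %lu\n", readpages_stat);	/* prints 1 */
	return 0;
}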
Signed-off-by: Dave Wysochanski
---
 fs/nfs/read.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index dd92156e27c5..464077daf62f 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -319,7 +319,6 @@ int nfs_readpage(struct file *file, struct page *page)
 	dprintk("NFS: nfs_readpage (%p %ld@%lu)\n",
 		page, PAGE_SIZE, page_index(page));
 	nfs_inc_stats(inode, NFSIOS_VFSREADPAGE);
-	nfs_add_stats(inode, NFSIOS_READPAGES, 1);

 	/*
 	 * Try to flush any pending writes to the file..
@@ -359,6 +358,7 @@ int nfs_readpage(struct file *file, struct page *page)
 		if (!PageUptodate(page) && !ret)
 			ret = xchg(&ctx->error, 0);
 	}
+	nfs_add_stats(inode, NFSIOS_READPAGES, 1);
 out:
 	put_nfs_open_context(ctx);
 	return ret;

From patchwork Wed Jan 27 08:03:12 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049235
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 3/8] NFS: Refactor nfs_readpage() and nfs_readpage_async() to use nfs_readdesc
Date: Wed, 27 Jan 2021 03:03:12 -0500
Message-Id: <1611734597-14754-4-git-send-email-dwysocha@redhat.com>
In-Reply-To:
<1611734597-14754-2-git-send-email-dwysocha@redhat.com> References: <1611734597-14754-2-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Both nfs_readpage() and nfs_readpages() use similar code. This patch should be no functional change, and refactors nfs_readpage_async() to use nfs_readdesc to enable future merging of nfs_readpage_async() and nfs_readpage_async_filler(). Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 62 ++++++++++++++++++++++++-------------------------- include/linux/nfs_fs.h | 3 +-- 2 files changed, 31 insertions(+), 34 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 464077daf62f..8c05e56dab65 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -114,18 +114,23 @@ static void nfs_readpage_release(struct nfs_page *req, int error) nfs_release_request(req); } -int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode, +struct nfs_readdesc { + struct nfs_pageio_descriptor pgio; + struct nfs_open_context *ctx; +}; + +int nfs_readpage_async(void *data, struct inode *inode, struct page *page) { + struct nfs_readdesc *desc = data; struct nfs_page *new; unsigned int len; - struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; len = nfs_page_length(page); if (len == 0) return nfs_return_empty_page(page); - new = nfs_create_request(ctx, page, 0, len); + new = nfs_create_request(desc->ctx, page, 0, len); if (IS_ERR(new)) { unlock_page(page); return PTR_ERR(new); @@ -133,21 +138,21 @@ int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode, if (len < PAGE_SIZE) zero_user_segment(page, len, PAGE_SIZE); - nfs_pageio_init_read(&pgio, inode, false, + nfs_pageio_init_read(&desc->pgio, inode, false, &nfs_async_read_completion_ops); - if (!nfs_pageio_add_request(&pgio, new)) { + if (!nfs_pageio_add_request(&desc->pgio, new)) { nfs_list_remove_request(new); - nfs_readpage_release(new, pgio.pg_error); + nfs_readpage_release(new, desc->pgio.pg_error); } - nfs_pageio_complete(&pgio); + nfs_pageio_complete(&desc->pgio); /* It doesn't make sense to do mirrored reads! */ - WARN_ON_ONCE(pgio.pg_mirror_count != 1); + WARN_ON_ONCE(desc->pgio.pg_mirror_count != 1); - pgm = &pgio.pg_mirrors[0]; + pgm = &desc->pgio.pg_mirrors[0]; NFS_I(inode)->read_io += pgm->pg_bytes_written; - return pgio.pg_error < 0 ? pgio.pg_error : 0; + return desc->pgio.pg_error < 0 ? 
desc->pgio.pg_error : 0; } static void nfs_page_group_set_uptodate(struct nfs_page *req) @@ -312,7 +317,7 @@ static void nfs_readpage_result(struct rpc_task *task, */ int nfs_readpage(struct file *file, struct page *page) { - struct nfs_open_context *ctx; + struct nfs_readdesc desc; struct inode *inode = page_file_mapping(page)->host; int ret; @@ -339,39 +344,34 @@ int nfs_readpage(struct file *file, struct page *page) if (file == NULL) { ret = -EBADF; - ctx = nfs_find_open_context(inode, NULL, FMODE_READ); - if (ctx == NULL) + desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ); + if (desc.ctx == NULL) goto out_unlock; } else - ctx = get_nfs_open_context(nfs_file_open_context(file)); + desc.ctx = get_nfs_open_context(nfs_file_open_context(file)); if (!IS_SYNC(inode)) { - ret = nfs_readpage_from_fscache(ctx, inode, page); + ret = nfs_readpage_from_fscache(desc.ctx, inode, page); if (ret == 0) goto out; } - xchg(&ctx->error, 0); - ret = nfs_readpage_async(ctx, inode, page); + xchg(&desc.ctx->error, 0); + ret = nfs_readpage_async(&desc, inode, page); if (!ret) { ret = wait_on_page_locked_killable(page); if (!PageUptodate(page) && !ret) - ret = xchg(&ctx->error, 0); + ret = xchg(&desc.ctx->error, 0); } nfs_add_stats(inode, NFSIOS_READPAGES, 1); out: - put_nfs_open_context(ctx); + put_nfs_open_context(desc.ctx); return ret; out_unlock: unlock_page(page); return ret; } -struct nfs_readdesc { - struct nfs_pageio_descriptor *pgio; - struct nfs_open_context *ctx; -}; - static int readpage_async_filler(void *data, struct page *page) { @@ -390,9 +390,9 @@ struct nfs_readdesc { if (len < PAGE_SIZE) zero_user_segment(page, len, PAGE_SIZE); - if (!nfs_pageio_add_request(desc->pgio, new)) { + if (!nfs_pageio_add_request(&desc->pgio, new)) { nfs_list_remove_request(new); - error = desc->pgio->pg_error; + error = desc->pgio.pg_error; nfs_readpage_release(new, error); goto out; } @@ -407,7 +407,6 @@ struct nfs_readdesc { int nfs_readpages(struct file *file, struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { - struct nfs_pageio_descriptor pgio; struct nfs_pgio_mirror *pgm; struct nfs_readdesc desc; struct inode *inode = mapping->host; @@ -440,17 +439,16 @@ int nfs_readpages(struct file *file, struct address_space *mapping, if (ret == 0) goto read_complete; /* all pages were read */ - desc.pgio = &pgio; - nfs_pageio_init_read(&pgio, inode, false, + nfs_pageio_init_read(&desc.pgio, inode, false, &nfs_async_read_completion_ops); ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc); - nfs_pageio_complete(&pgio); + nfs_pageio_complete(&desc.pgio); /* It doesn't make sense to do mirrored reads! 
 */
-	WARN_ON_ONCE(pgio.pg_mirror_count != 1);
+	WARN_ON_ONCE(desc.pgio.pg_mirror_count != 1);

-	pgm = &pgio.pg_mirrors[0];
+	pgm = &desc.pgio.pg_mirrors[0];
 	NFS_I(inode)->read_io += pgm->pg_bytes_written;
 	npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >> PAGE_SHIFT;

diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 681ed98e4ba8..cb0248a34518 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -570,8 +570,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s
 extern int nfs_readpage(struct file *, struct page *);
 extern int nfs_readpages(struct file *, struct address_space *,
 			struct list_head *, unsigned);
-extern int nfs_readpage_async(struct nfs_open_context *, struct inode *,
-			struct page *);
+extern int nfs_readpage_async(void *, struct inode *, struct page *);

 /*
  * inline functions

From patchwork Wed Jan 27 08:03:13 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049233
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 4/8] NFS: Call readpage_async_filler() from nfs_readpage_async()
Date: Wed, 27 Jan 2021 03:03:13 -0500
Message-Id:
<1611734597-14754-5-git-send-email-dwysocha@redhat.com> In-Reply-To: <1611734597-14754-2-git-send-email-dwysocha@redhat.com> References: <1611734597-14754-2-git-send-email-dwysocha@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Precedence: bulk List-ID: X-Mailing-List: linux-nfs@vger.kernel.org Refactor slightly so nfs_readpage_async() calls into readpage_async_filler(). Signed-off-by: Dave Wysochanski --- fs/nfs/read.c | 28 +++++++++++----------------- 1 file changed, 11 insertions(+), 17 deletions(-) diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 8c05e56dab65..0ed79e6bc486 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -119,31 +119,22 @@ struct nfs_readdesc { struct nfs_open_context *ctx; }; +static int readpage_async_filler(void *data, struct page *page); + int nfs_readpage_async(void *data, struct inode *inode, struct page *page) { struct nfs_readdesc *desc = data; - struct nfs_page *new; - unsigned int len; struct nfs_pgio_mirror *pgm; - - len = nfs_page_length(page); - if (len == 0) - return nfs_return_empty_page(page); - new = nfs_create_request(desc->ctx, page, 0, len); - if (IS_ERR(new)) { - unlock_page(page); - return PTR_ERR(new); - } - if (len < PAGE_SIZE) - zero_user_segment(page, len, PAGE_SIZE); + int error; nfs_pageio_init_read(&desc->pgio, inode, false, &nfs_async_read_completion_ops); - if (!nfs_pageio_add_request(&desc->pgio, new)) { - nfs_list_remove_request(new); - nfs_readpage_release(new, desc->pgio.pg_error); - } + + error = readpage_async_filler(desc, page); + if (error) + goto out; + nfs_pageio_complete(&desc->pgio); /* It doesn't make sense to do mirrored reads! */ @@ -153,6 +144,9 @@ int nfs_readpage_async(void *data, struct inode *inode, NFS_I(inode)->read_io += pgm->pg_bytes_written; return desc->pgio.pg_error < 0 ? 
desc->pgio.pg_error : 0;
+
+out:
+	return error;
 }

 static void nfs_page_group_set_uptodate(struct nfs_page *req)

From patchwork Wed Jan 27 08:03:14 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049231
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 5/8] NFS: Add nfs_pageio_complete_read() and remove nfs_readpage_async()
Date: Wed, 27 Jan 2021 03:03:14 -0500
Message-Id: <1611734597-14754-6-git-send-email-dwysocha@redhat.com>

Add nfs_pageio_complete_read() and call it from both nfs_readpage() and
nfs_readpages(), since the submission and accounting logic is the same for
both functions.
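The helper introduced here folds the shared tail of nfs_readpage() and nfs_readpages() (pageio completion plus read_io and NFSIOS_READPAGES accounting) into one function. A standalone sketch of just the accounting step, with an invented descriptor type standing in for struct nfs_pageio_descriptor:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Invented stand-ins for the pageio descriptor and per-inode counters. */
struct desc { unsigned long bytes_written; };
static unsigned long read_io_bytes;
static unsigned long readpages_stat;

/* One shared tail for both the readpage and readpages paths. */
static void complete_read(const struct desc *d)
{
	unsigned long npages;

	read_io_bytes += d->bytes_written;
	npages = (d->bytes_written + PAGE_SIZE - 1) >> PAGE_SHIFT;	/* round up */
	readpages_stat += npages;
}

int main(void)
{
	struct desc one  = { .bytes_written = 300 };		/* one partial page */
	struct desc many = { .bytes_written = 5 * PAGE_SIZE };

	complete_read(&one);
	complete_read(&many);
	printf("io=%lu bytes, pages=%lu\n", read_io_bytes, readpages_stat);	/* io=20780, pages=6 */
	return 0;
}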
Signed-off-by: Dave Wysochanski --- fs/nfs/fscache.c | 4 -- fs/nfs/read.c | 137 ++++++++++++++++++++++--------------------------- include/linux/nfs_fs.h | 1 - 3 files changed, 61 insertions(+), 81 deletions(-) diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index a60df88efc40..c4c021c6ebbd 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -390,10 +390,6 @@ static void nfs_readpage_from_fscache_complete(struct page *page, if (!error) { SetPageUptodate(page); unlock_page(page); - } else { - error = nfs_readpage_async(context, page->mapping->host, page); - if (error) - unlock_page(page); } } diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 0ed79e6bc486..d2b6dce1f99f 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -74,6 +74,24 @@ void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio, } EXPORT_SYMBOL_GPL(nfs_pageio_init_read); +static void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio, + struct inode *inode) +{ + struct nfs_pgio_mirror *pgm; + unsigned long npages; + + nfs_pageio_complete(pgio); + + /* It doesn't make sense to do mirrored reads! */ + WARN_ON_ONCE(pgio->pg_mirror_count != 1); + + pgm = &pgio->pg_mirrors[0]; + NFS_I(inode)->read_io += pgm->pg_bytes_written; + npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >> PAGE_SHIFT; + nfs_add_stats(inode, NFSIOS_READPAGES, npages); +} + + void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio) { struct nfs_pgio_mirror *mirror; @@ -119,36 +137,6 @@ struct nfs_readdesc { struct nfs_open_context *ctx; }; -static int readpage_async_filler(void *data, struct page *page); - -int nfs_readpage_async(void *data, struct inode *inode, - struct page *page) -{ - struct nfs_readdesc *desc = data; - struct nfs_pgio_mirror *pgm; - int error; - - nfs_pageio_init_read(&desc->pgio, inode, false, - &nfs_async_read_completion_ops); - - error = readpage_async_filler(desc, page); - if (error) - goto out; - - nfs_pageio_complete(&desc->pgio); - - /* It doesn't make sense to do mirrored reads! */ - WARN_ON_ONCE(desc->pgio.pg_mirror_count != 1); - - pgm = &desc->pgio.pg_mirrors[0]; - NFS_I(inode)->read_io += pgm->pg_bytes_written; - - return desc->pgio.pg_error < 0 ? desc->pgio.pg_error : 0; - -out: - return error; -} - static void nfs_page_group_set_uptodate(struct nfs_page *req) { if (nfs_page_group_sync_on_bit(req, PG_UPTODATE)) @@ -170,8 +158,7 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) if (test_bit(NFS_IOHDR_EOF, &hdr->flags)) { /* note: regions of the page not covered by a - * request are zeroed in nfs_readpage_async / - * readpage_async_filler */ + * request are zeroed in readpage_async_filler */ if (bytes > hdr->good_bytes) { /* nothing in this request was good, so zero * the full extent of the request */ @@ -303,6 +290,38 @@ static void nfs_readpage_result(struct rpc_task *task, nfs_readpage_retry(task, hdr); } +static int +readpage_async_filler(void *data, struct page *page) +{ + struct nfs_readdesc *desc = data; + struct nfs_page *new; + unsigned int len; + int error; + + len = nfs_page_length(page); + if (len == 0) + return nfs_return_empty_page(page); + + new = nfs_create_request(desc->ctx, page, 0, len); + if (IS_ERR(new)) + goto out_error; + + if (len < PAGE_SIZE) + zero_user_segment(page, len, PAGE_SIZE); + if (!nfs_pageio_add_request(&desc->pgio, new)) { + nfs_list_remove_request(new); + error = desc->pgio.pg_error; + nfs_readpage_release(new, error); + goto out; + } + return 0; +out_error: + error = PTR_ERR(new); + unlock_page(page); +out: + return error; +} + /* * Read a page over NFS. 
* We read the page synchronously in the following case: @@ -351,13 +370,20 @@ int nfs_readpage(struct file *file, struct page *page) } xchg(&desc.ctx->error, 0); - ret = nfs_readpage_async(&desc, inode, page); + nfs_pageio_init_read(&desc.pgio, inode, false, + &nfs_async_read_completion_ops); + + ret = readpage_async_filler(&desc, page); + + if (!ret) + nfs_pageio_complete_read(&desc.pgio, inode); + + ret = desc.pgio.pg_error < 0 ? desc.pgio.pg_error : 0; if (!ret) { ret = wait_on_page_locked_killable(page); if (!PageUptodate(page) && !ret) ret = xchg(&desc.ctx->error, 0); } - nfs_add_stats(inode, NFSIOS_READPAGES, 1); out: put_nfs_open_context(desc.ctx); return ret; @@ -366,45 +392,11 @@ int nfs_readpage(struct file *file, struct page *page) return ret; } -static int -readpage_async_filler(void *data, struct page *page) -{ - struct nfs_readdesc *desc = (struct nfs_readdesc *)data; - struct nfs_page *new; - unsigned int len; - int error; - - len = nfs_page_length(page); - if (len == 0) - return nfs_return_empty_page(page); - - new = nfs_create_request(desc->ctx, page, 0, len); - if (IS_ERR(new)) - goto out_error; - - if (len < PAGE_SIZE) - zero_user_segment(page, len, PAGE_SIZE); - if (!nfs_pageio_add_request(&desc->pgio, new)) { - nfs_list_remove_request(new); - error = desc->pgio.pg_error; - nfs_readpage_release(new, error); - goto out; - } - return 0; -out_error: - error = PTR_ERR(new); - unlock_page(page); -out: - return error; -} - int nfs_readpages(struct file *file, struct address_space *mapping, struct list_head *pages, unsigned nr_pages) { - struct nfs_pgio_mirror *pgm; struct nfs_readdesc desc; struct inode *inode = mapping->host; - unsigned long npages; int ret; dprintk("NFS: nfs_readpages (%s/%Lu %d)\n", @@ -437,16 +429,9 @@ int nfs_readpages(struct file *file, struct address_space *mapping, &nfs_async_read_completion_ops); ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc); - nfs_pageio_complete(&desc.pgio); - /* It doesn't make sense to do mirrored reads! 
 */
-	WARN_ON_ONCE(desc.pgio.pg_mirror_count != 1);
+	nfs_pageio_complete_read(&desc.pgio, inode);

-	pgm = &desc.pgio.pg_mirrors[0];
-	NFS_I(inode)->read_io += pgm->pg_bytes_written;
-	npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >>
-			PAGE_SHIFT;
-	nfs_add_stats(inode, NFSIOS_READPAGES, npages);
 read_complete:
 	put_nfs_open_context(desc.ctx);
 out:

diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index cb0248a34518..3cfcf219e96b 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -570,7 +570,6 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s
 extern int nfs_readpage(struct file *, struct page *);
 extern int nfs_readpages(struct file *, struct address_space *,
 			struct list_head *, unsigned);
-extern int nfs_readpage_async(void *, struct inode *, struct page *);

 /*
  * inline functions

From patchwork Wed Jan 27 08:03:15 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049229
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 6/8] NFS: Allow internal use of read structs and functions
Date: Wed, 27 Jan 2021 03:03:15 -0500
Message-Id: <1611734597-14754-7-git-send-email-dwysocha@redhat.com>
The conversion of the NFS read paths to the new fscache API will require use
of a few read structs and functions, so move their declarations into
fs/nfs/internal.h as required.

Signed-off-by: Dave Wysochanski
---
 fs/nfs/internal.h |  8 ++++++++
 fs/nfs/read.c     | 13 ++++---------
 2 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 62d3189745cd..8514d002c922 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -457,9 +457,17 @@ extern char *nfs_path(char **p, struct dentry *dentry,
 struct nfs_pgio_completion_ops;

 /* read.c */
+extern const struct nfs_pgio_completion_ops nfs_async_read_completion_ops;
 extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio,
 			struct inode *inode, bool force_mds,
 			const struct nfs_pgio_completion_ops *compl_ops);
+struct nfs_readdesc {
+	struct nfs_pageio_descriptor pgio;
+	struct nfs_open_context *ctx;
+};
+extern int readpage_async_filler(void *data, struct page *page);
+extern void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio,
+			struct inode *inode);
 extern void nfs_read_prepare(struct rpc_task *task, void *calldata);
 extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio);

diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index d2b6dce1f99f..9618abf01136 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -30,7 +30,7 @@

 #define NFSDBG_FACILITY		NFSDBG_PAGECACHE

-static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops;
+const struct nfs_pgio_completion_ops nfs_async_read_completion_ops;
 static const struct nfs_rw_ops nfs_rw_read_ops;
 static struct kmem_cache *nfs_rdata_cachep;

@@ -74,7 +74,7 @@ void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio,
 }
 EXPORT_SYMBOL_GPL(nfs_pageio_init_read);

-static void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio,
+void nfs_pageio_complete_read(struct nfs_pageio_descriptor *pgio,
 		struct inode *inode)
 {
 	struct nfs_pgio_mirror *pgm;
@@ -132,11 +132,6 @@ static void nfs_readpage_release(struct nfs_page *req, int error)
 	nfs_release_request(req);
 }

-struct nfs_readdesc {
-	struct nfs_pageio_descriptor pgio;
-	struct nfs_open_context *ctx;
-};
-
 static void nfs_page_group_set_uptodate(struct nfs_page *req)
 {
 	if (nfs_page_group_sync_on_bit(req, PG_UPTODATE))
@@ -215,7 +210,7 @@ static void nfs_initiate_read(struct nfs_pgio_header *hdr,
 	}
 }

-static const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = {
+const struct nfs_pgio_completion_ops nfs_async_read_completion_ops = {
 	.error_cleanup = nfs_async_read_error,
 	.completion = nfs_read_completion,
 };
@@ -290,7 +285,7 @@ static void nfs_readpage_result(struct rpc_task *task,
 	nfs_readpage_retry(task, hdr);
 }

-static int
+int
 readpage_async_filler(void *data, struct page *page)
 {
 	struct nfs_readdesc *desc = data;

From patchwork Wed Jan 27 08:03:16 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049227
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 7/8] NFS: Convert to the netfs API and nfs_readpage to use netfs_readpage
Date: Wed, 27 Jan 2021 03:03:16 -0500
Message-Id: <1611734597-14754-8-git-send-email-dwysocha@redhat.com>

This patch converts the main NFS read paths to the new netfs API when fscache
is enabled, converting readpage while minimizing changes to the existing NFS
read code paths.

The netfs API requires a few functions to be provided by the netfs:
- init_rreq: allows the netfs to allocate resources prior to IO
- is_cache_enabled: allows the netfs to disable fscache
- begin_cache_operation: signals the start of an fscache IO
- issue_op: called when the netfs should issue a read to the server
- clamp_length: allows the netfs to limit the size of an IO
- cleanup: allows the netfs to clean up after an IO is complete

The new netfs_readpage() API is called when fscache is enabled. If a read
cannot be satisfied from fscache, the netfs is called back via issue_op() to
obtain the data from the server. Once the read completes, the netfs must call
netfs_subreq_terminated(), which then may write the data to fscache.
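The functions listed above are supplied to the netfs layer as a table of callbacks. A self-contained model of that shape, with invented names rather than the real netfs_read_request_ops, assuming a cache miss so the read falls through to issue_op():

#include <stdbool.h>
#include <stdio.h>

/* Invented model of a read request and its ops table. */
struct rreq { const char *name; long len; };

struct read_request_ops {
	bool (*is_cache_enabled)(const struct rreq *rreq);
	bool (*clamp_length)(struct rreq *rreq);
	void (*issue_op)(struct rreq *rreq);	/* fall back to the server */
	void (*cleanup)(struct rreq *rreq);
};

static bool demo_is_cache_enabled(const struct rreq *rreq)
{
	(void)rreq;
	return true;
}

static bool demo_clamp_length(struct rreq *rreq)
{
	const long rsize = 4096;	/* an rsize-style cap on one IO */

	if (rreq->len > rsize)
		rreq->len = rsize;
	return true;
}

static void demo_issue_op(struct rreq *rreq)
{
	printf("issue read for %s, len=%ld\n", rreq->name, rreq->len);
}

static void demo_cleanup(struct rreq *rreq)
{
	printf("cleanup %s\n", rreq->name);
}

static const struct read_request_ops demo_ops = {
	.is_cache_enabled = demo_is_cache_enabled,
	.clamp_length	  = demo_clamp_length,
	.issue_op	  = demo_issue_op,
	.cleanup	  = demo_cleanup,
};

/* Stand-in for the library side: try the cache, then call back into the ops. */
static void model_readpage(struct rreq *rreq, const struct read_request_ops *ops)
{
	if (ops->is_cache_enabled(rreq))
		printf("check the cache first for %s\n", rreq->name);
	/* pretend the cache missed, so clamp and go to the server */
	ops->clamp_length(rreq);
	ops->issue_op(rreq);
	ops->cleanup(rreq);
}

int main(void)
{
	struct rreq r = { .name = "page0", .len = 16384 };

	model_readpage(&r, &demo_ops);
	return 0;
}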
In order to call back into fscache via netfs_subreq_terminated(), we must save the netfs_read_subrequest* as a field in the nfs_pgio_header, similar to nfs_direct_req. If the netfs has a read IO limit (for example, NFS 'rsize' mount options) the clamp_length() function is called. Signed-off-by: Dave Wysochanski --- fs/nfs/fscache.c | 158 ++++++++++++++++++++++++++++++++--------------- fs/nfs/fscache.h | 44 +++---------- fs/nfs/pagelist.c | 2 + fs/nfs/read.c | 9 ++- include/linux/nfs_page.h | 1 + include/linux/nfs_xdr.h | 1 + 6 files changed, 127 insertions(+), 88 deletions(-) diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c index c4c021c6ebbd..fede075209f5 100644 --- a/fs/nfs/fscache.c +++ b/fs/nfs/fscache.c @@ -15,6 +15,9 @@ #include #include #include +#include +#include +#include #include "internal.h" #include "iostat.h" @@ -373,62 +376,126 @@ void __nfs_fscache_invalidate_page(struct page *page, struct inode *inode) NFSIOS_FSCACHE_PAGES_UNCACHED); } -/* - * Handle completion of a page being read from the cache. - * - Called in process (keventd) context. - */ -static void nfs_readpage_from_fscache_complete(struct page *page, - void *context, - int error) +static void nfs_issue_op(struct netfs_read_subrequest *subreq) { - dfprintk(FSCACHE, - "NFS: readpage_from_fscache_complete (0x%p/0x%p/%d)\n", - page, context, error); - - /* if the read completes with an error, we just unlock the page and let - * the VM reissue the readpage */ - if (!error) { - SetPageUptodate(page); - unlock_page(page); + struct inode *inode = subreq->rreq->inode; + struct nfs_readdesc *desc = subreq->rreq->netfs_priv; + struct page *page; + pgoff_t start = (subreq->start + subreq->transferred) >> PAGE_SHIFT; + pgoff_t last = ((subreq->start + subreq->len - + subreq->transferred - 1) >> PAGE_SHIFT); + XA_STATE(xas, &subreq->rreq->mapping->i_pages, start); + + dfprintk(FSCACHE, "NFS: %s(fsc:%p s:%lu l:%lu) subreq->start: %lld " + "subreq->len: %ld subreq->transferred: %ld\n", + __func__, nfs_i_fscache(inode), start, last, subreq->start, + subreq->len, subreq->transferred); + + nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL, + last - start + 1); + nfs_pageio_init_read(&desc->pgio, inode, false, + &nfs_async_read_completion_ops); + + desc->pgio.pg_fsc = subreq; /* used in completion */ + + rcu_read_lock(); + xas_for_each(&xas, page, last) { + subreq->error = readpage_async_filler(desc, page); + if (subreq->error < 0) + break; + } + rcu_read_unlock(); + nfs_pageio_complete_read(&desc->pgio, inode); +} + +static bool nfs_clamp_length(struct netfs_read_subrequest *subreq) +{ + struct inode *inode = subreq->rreq->mapping->host; + unsigned int rsize = NFS_SB(inode->i_sb)->rsize; + + if (subreq->len > rsize) { + dfprintk(FSCACHE, + "NFS: %s(fsc:%p slen:%lu rsize: %u)\n", + __func__, nfs_i_fscache(inode), subreq->len, rsize); + subreq->len = rsize; } + + return true; +} + +static void nfs_cleanup(struct address_space *mapping, void *netfs_priv) +{ + ; /* fscache assumes if netfs_priv is given we have cleanup */ +} + +atomic_t nfs_fscache_debug_id; +static void nfs_init_rreq(struct netfs_read_request *rreq, struct file *file) +{ + struct nfs_inode *nfsi = NFS_I(rreq->inode); + + if (nfsi->fscache && test_bit(NFS_INO_FSCACHE, &nfsi->flags)) + rreq->cookie_debug_id = atomic_inc_return(&nfs_fscache_debug_id); +} + +static bool nfs_is_cache_enabled(struct inode *inode) +{ + struct nfs_inode *nfsi = NFS_I(inode); + + return nfsi->fscache && test_bit(NFS_INO_FSCACHE, &nfsi->flags); +} + +static int 
nfs_begin_cache_operation(struct netfs_read_request *rreq) +{ + struct fscache_cookie *cookie = NFS_I(rreq->inode)->fscache; + + return fscache_begin_read_operation(rreq, cookie); } +static struct netfs_read_request_ops nfs_fscache_req_ops = { + .init_rreq = nfs_init_rreq, + .is_cache_enabled = nfs_is_cache_enabled, + .begin_cache_operation = nfs_begin_cache_operation, + .issue_op = nfs_issue_op, + .clamp_length = nfs_clamp_length, + .cleanup = nfs_cleanup +}; + /* * Retrieve a page from fscache */ -int __nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, struct page *page) +int nfs_readpage_from_fscache(struct file *file, + struct page *page, + struct nfs_readdesc *desc) { int ret; + struct inode *inode = file_inode(file); + + if (!NFS_I(file_inode(file))->fscache) + return -ENOBUFS; dfprintk(FSCACHE, "NFS: readpage_from_fscache(fsc:%p/p:%p(i:%lx f:%lx)/0x%p)\n", nfs_i_fscache(inode), page, page->index, page->flags, inode); - ret = fscache_read_or_alloc_page(nfs_i_fscache(inode), - page, - nfs_readpage_from_fscache_complete, - ctx, - GFP_KERNEL); + ret = netfs_readpage(file, page, &nfs_fscache_req_ops, desc); switch (ret) { - case 0: /* read BIO submitted (page in fscache) */ - dfprintk(FSCACHE, - "NFS: readpage_from_fscache: BIO submitted\n"); + case 0: /* read submitted */ + dfprintk(FSCACHE, "NFS: readpage_from_fscache: submitted\n"); nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK); return ret; case -ENOBUFS: /* inode not in cache */ case -ENODATA: /* page not in cache */ nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); - dfprintk(FSCACHE, - "NFS: readpage_from_fscache %d\n", ret); + dfprintk(FSCACHE, "NFS: readpage_from_fscache %d\n", ret); return 1; default: dfprintk(FSCACHE, "NFS: readpage_from_fscache %d\n", ret); nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); } + return ret; } @@ -449,7 +516,7 @@ int __nfs_readpages_from_fscache(struct nfs_open_context *ctx, ret = fscache_read_or_alloc_pages(nfs_i_fscache(inode), mapping, pages, nr_pages, - nfs_readpage_from_fscache_complete, + NULL, ctx, mapping_gfp_mask(mapping)); if (*nr_pages < npages) @@ -483,30 +550,19 @@ int __nfs_readpages_from_fscache(struct nfs_open_context *ctx, } /* - * Store a newly fetched page in fscache - * - PG_fscache must be set on the page + * Store a newly fetched data in fscache */ -void __nfs_readpage_to_fscache(struct inode *inode, struct page *page, int sync) +void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes) { - int ret; + struct netfs_read_subrequest *subreq = hdr->fsc; - dfprintk(FSCACHE, - "NFS: readpage_to_fscache(fsc:%p/p:%p(i:%lx f:%lx)/%d)\n", - nfs_i_fscache(inode), page, page->index, page->flags, sync); - - ret = fscache_write_page(nfs_i_fscache(inode), page, - inode->i_size, GFP_KERNEL); - dfprintk(FSCACHE, - "NFS: readpage_to_fscache: p:%p(i:%lu f:%lx) ret %d\n", - page, page->index, page->flags, ret); - - if (ret != 0) { - fscache_uncache_page(nfs_i_fscache(inode), page); - nfs_inc_fscache_stats(inode, - NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL); - nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED); - } else { - nfs_inc_fscache_stats(inode, - NFSIOS_FSCACHE_PAGES_WRITTEN_OK); + if (NFS_I(hdr->inode)->fscache && subreq) { + dfprintk(FSCACHE, + "NFS: read_completion_to_fscache(fsc:%p err:%d bytes:%lu subreq->len:%lu\n", + NFS_I(hdr->inode)->fscache, hdr->error, bytes, subreq->len); + __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); + netfs_subreq_terminated(subreq, hdr->error ?: bytes); + 
hdr->fsc = NULL; } } diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h index 6754c8607230..858f28b1ce03 100644 --- a/fs/nfs/fscache.h +++ b/fs/nfs/fscache.h @@ -95,13 +95,14 @@ struct nfs_fscache_inode_auxdata { extern void __nfs_fscache_invalidate_page(struct page *, struct inode *); extern int nfs_fscache_release_page(struct page *, gfp_t); - -extern int __nfs_readpage_from_fscache(struct nfs_open_context *, - struct inode *, struct page *); +extern int nfs_readpage_from_fscache(struct file *file, + struct page *page, + struct nfs_readdesc *desc); extern int __nfs_readpages_from_fscache(struct nfs_open_context *, struct inode *, struct address_space *, struct list_head *, unsigned *); -extern void __nfs_readpage_to_fscache(struct inode *, struct page *, int); +extern void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes); /* * wait for a page to complete writing to the cache @@ -125,18 +126,6 @@ static inline void nfs_fscache_invalidate_page(struct page *page, } /* - * Retrieve a page from an inode data storage object. - */ -static inline int nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct page *page) -{ - if (NFS_I(inode)->fscache) - return __nfs_readpage_from_fscache(ctx, inode, page); - return -ENOBUFS; -} - -/* * Retrieve a set of pages from an inode data storage object. */ static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, @@ -152,18 +141,6 @@ static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, } /* - * Store a page newly fetched from the server in an inode data storage object - * in the cache. - */ -static inline void nfs_readpage_to_fscache(struct inode *inode, - struct page *page, - int sync) -{ - if (PageFsCache(page)) - __nfs_readpage_to_fscache(inode, page, sync); -} - -/* * Invalidate the contents of fscache for this inode. This will not sleep. 
*/ static inline void nfs_fscache_invalidate(struct inode *inode) @@ -212,9 +189,9 @@ static inline void nfs_fscache_invalidate_page(struct page *page, static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi, struct page *page) {} -static inline int nfs_readpage_from_fscache(struct nfs_open_context *ctx, - struct inode *inode, - struct page *page) +static inline int nfs_readpage_from_fscache(struct file *file, + struct page *page, + struct nfs_readdesc *desc) { return -ENOBUFS; } @@ -226,9 +203,8 @@ static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx, { return -ENOBUFS; } -static inline void nfs_readpage_to_fscache(struct inode *inode, - struct page *page, int sync) {} - +static inline void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr, + unsigned long bytes) {} static inline void nfs_fscache_invalidate(struct inode *inode) {} static inline void nfs_fscache_wait_on_invalidate(struct inode *inode) {} diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c index 78c9c4bdef2b..2e21e6c4023a 100644 --- a/fs/nfs/pagelist.c +++ b/fs/nfs/pagelist.c @@ -68,6 +68,7 @@ void nfs_pgheader_init(struct nfs_pageio_descriptor *desc, hdr->good_bytes = mirror->pg_count; hdr->io_completion = desc->pg_io_completion; hdr->dreq = desc->pg_dreq; + hdr->fsc = desc->pg_fsc; hdr->release = release; hdr->completion_ops = desc->pg_completion_ops; if (hdr->completion_ops->init_hdr) @@ -849,6 +850,7 @@ void nfs_pageio_init(struct nfs_pageio_descriptor *desc, desc->pg_lseg = NULL; desc->pg_io_completion = NULL; desc->pg_dreq = NULL; + desc->pg_fsc = NULL; desc->pg_bsize = bsize; desc->pg_mirror_count = 1; diff --git a/fs/nfs/read.c b/fs/nfs/read.c index 9618abf01136..b47e4f38539b 100644 --- a/fs/nfs/read.c +++ b/fs/nfs/read.c @@ -124,10 +124,11 @@ static void nfs_readpage_release(struct nfs_page *req, int error) struct address_space *mapping = page_file_mapping(page); if (PageUptodate(page)) - nfs_readpage_to_fscache(inode, page, 0); + ; /* FIXME: review fscache page error handling */ else if (!PageError(page) && !PagePrivate(page)) generic_error_remove_page(mapping, page); - unlock_page(page); + if (!nfs_i_fscache(inode)) + unlock_page(page); } nfs_release_request(req); } @@ -181,6 +182,8 @@ static void nfs_read_completion(struct nfs_pgio_header *hdr) nfs_list_remove_request(req); nfs_readpage_release(req, error); } + /* FIXME: NFS_IOHDR_ERROR and NFS_IOHDR_EOF handled per-page */ + nfs_read_completion_to_fscache(hdr, bytes); out: hdr->release(hdr); } @@ -359,7 +362,7 @@ int nfs_readpage(struct file *file, struct page *page) desc.ctx = get_nfs_open_context(nfs_file_open_context(file)); if (!IS_SYNC(inode)) { - ret = nfs_readpage_from_fscache(desc.ctx, inode, page); + ret = nfs_readpage_from_fscache(file, page, &desc); if (ret == 0) goto out; } diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h index f0373a6cb5fb..b45570bcde91 100644 --- a/include/linux/nfs_page.h +++ b/include/linux/nfs_page.h @@ -101,6 +101,7 @@ struct nfs_pageio_descriptor { struct pnfs_layout_segment *pg_lseg; struct nfs_io_completion *pg_io_completion; struct nfs_direct_req *pg_dreq; + void *pg_fsc; unsigned int pg_bsize; /* default bsize for mirrors */ u32 pg_mirror_count; diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h index 3327239fa2f9..95423d3d9d98 100644 --- a/include/linux/nfs_xdr.h +++ b/include/linux/nfs_xdr.h @@ -1607,6 +1607,7 @@ struct nfs_pgio_header { const struct nfs_rw_ops *rw_ops; struct nfs_io_completion *io_completion; struct nfs_direct_req *dreq; + void 
*fsc;
 	int			pnfs_error;
 	int			error;		/* merge with pnfs_error */

From patchwork Wed Jan 27 08:03:17 2021
X-Patchwork-Submitter: David Wysochanski
X-Patchwork-Id: 12049225
From: Dave Wysochanski
To: Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org
Subject: [PATCH 8/8] NFS: Convert readpages to readahead and use netfs_readahead for fscache
Date: Wed, 27 Jan 2021 03:03:17 -0500
Message-Id: <1611734597-14754-9-git-send-email-dwysocha@redhat.com>

The new FS-Cache API does not have a readpages equivalent; in place of
fscache_read_or_alloc_pages() it provides a readahead function,
netfs_readahead(). Call netfs_readahead() if fscache is enabled; if not, use
readahead_page() to walk the pages that need reading and call
readpage_async_filler() on each. If any page returns an error, exit the loop,
which matches the behavior of the previously used read_cache_pages() when
'filler' returns an error.
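The replacement loop walks the readahead window page by page and stops at the first filler failure, mirroring read_cache_pages() semantics. A standalone model of that control flow (the index array and filler() are invented stand-ins for the readahead_control machinery):

#include <stdio.h>

/* Invented stand-in for a readahead window of page indexes. */
static const long window[] = { 10, 11, 12, 13, 14 };

/* Invented filler: pretend index 13 cannot be submitted. */
static int filler(long index)
{
	return index == 13 ? -5 : 0;
}

int main(void)
{
	unsigned long submitted = 0;
	unsigned long i;
	int ret = 0;

	for (i = 0; i < sizeof(window) / sizeof(window[0]); i++) {
		ret = filler(window[i]);
		if (ret)	/* first error ends the batch, like read_cache_pages() */
			break;
		submitted++;
	}
	printf("submitted %lu pages, ret=%d\n", submitted, ret);	/* 3 pages, ret=-5 */
	return 0;
}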
Signed-off-by: Dave Wysochanski 
---
 fs/nfs/file.c              |  2 +-
 fs/nfs/fscache.c           | 50 ++++++++++------------------------------------
 fs/nfs/fscache.h           | 28 ++++----------------------
 fs/nfs/read.c              | 36 ++++++++++++++++-----------------
 include/linux/nfs_fs.h     |  3 +--
 include/linux/nfs_iostat.h |  2 +-
 6 files changed, 35 insertions(+), 86 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 63940a7a70be..ebcaa164db5f 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -515,7 +515,7 @@ static void nfs_swap_deactivate(struct file *file)
 
 const struct address_space_operations nfs_file_aops = {
         .readpage = nfs_readpage,
-        .readpages = nfs_readpages,
+        .readahead = nfs_readahead,
         .set_page_dirty = __set_page_dirty_nobuffers,
         .writepage = nfs_writepage,
         .writepages = nfs_writepages,
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index fede075209f5..65fb9065a70c 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -502,51 +502,21 @@ int nfs_readpage_from_fscache(struct file *file,
 /*
  * Retrieve a set of pages from fscache
  */
-int __nfs_readpages_from_fscache(struct nfs_open_context *ctx,
-                                 struct inode *inode,
-                                 struct address_space *mapping,
-                                 struct list_head *pages,
-                                 unsigned *nr_pages)
+int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+                               struct readahead_control *ractl)
 {
-        unsigned npages = *nr_pages;
-        int ret;
+        struct inode *inode = ractl->mapping->host;
 
-        dfprintk(FSCACHE, "NFS: nfs_getpages_from_fscache (0x%p/%u/0x%p)\n",
-                 nfs_i_fscache(inode), npages, inode);
-
-        ret = fscache_read_or_alloc_pages(nfs_i_fscache(inode),
-                                          mapping, pages, nr_pages,
-                                          NULL,
-                                          ctx,
-                                          mapping_gfp_mask(mapping));
-        if (*nr_pages < npages)
-                nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK,
-                                      npages);
-        if (*nr_pages > 0)
-                nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL,
-                                      *nr_pages);
+        if (!NFS_I(ractl->mapping->host)->fscache)
+                return -ENOBUFS;
 
-        switch (ret) {
-        case 0: /* read submitted to the cache for all pages */
-                BUG_ON(!list_empty(pages));
-                BUG_ON(*nr_pages != 0);
-                dfprintk(FSCACHE,
-                         "NFS: nfs_getpages_from_fscache: submitted\n");
+        dfprintk(FSCACHE, "NFS: nfs_readahead_from_fscache (0x%p/%u/0x%p)\n",
+                 nfs_i_fscache(inode), readahead_count(ractl), inode);
 
-                return ret;
+        netfs_readahead(ractl, &nfs_fscache_req_ops, desc);
 
-        case -ENOBUFS: /* some pages aren't cached and can't be */
-        case -ENODATA: /* some pages aren't cached */
-                dfprintk(FSCACHE,
-                         "NFS: nfs_getpages_from_fscache: no page: %d\n", ret);
-                return 1;
-
-        default:
-                dfprintk(FSCACHE,
-                         "NFS: nfs_getpages_from_fscache: ret %d\n", ret);
-        }
-
-        return ret;
+        /* FIXME: NFSIOS_FSCACHE_ stats */
+        return 0;
 }
 
 /*
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 858f28b1ce03..faccf4549d55 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -98,12 +98,10 @@ struct nfs_fscache_inode_auxdata {
 extern int nfs_readpage_from_fscache(struct file *file,
                                      struct page *page,
                                      struct nfs_readdesc *desc);
-extern int __nfs_readpages_from_fscache(struct nfs_open_context *,
-                                        struct inode *, struct address_space *,
-                                        struct list_head *, unsigned *);
+extern int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+                                      struct readahead_control *ractl);
 extern void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr,
                                            unsigned long bytes);
-
 /*
  * wait for a page to complete writing to the cache
  */
@@ -126,21 +124,6 @@ static inline void nfs_fscache_invalidate_page(struct page *page,
 }
 
 /*
- * Retrieve a set of pages from an inode data storage object.
- */
-static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx,
-                                             struct inode *inode,
-                                             struct address_space *mapping,
-                                             struct list_head *pages,
-                                             unsigned *nr_pages)
-{
-        if (NFS_I(inode)->fscache)
-                return __nfs_readpages_from_fscache(ctx, inode, mapping, pages,
-                                                    nr_pages);
-        return -ENOBUFS;
-}
-
-/*
  * Invalidate the contents of fscache for this inode. This will not sleep.
  */
 static inline void nfs_fscache_invalidate(struct inode *inode)
@@ -195,11 +178,8 @@ static inline int nfs_readpage_from_fscache(struct file *file,
 {
         return -ENOBUFS;
 }
-static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx,
-                                             struct inode *inode,
-                                             struct address_space *mapping,
-                                             struct list_head *pages,
-                                             unsigned *nr_pages)
+static inline int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+                                             struct readahead_control *ractl)
 {
         return -ENOBUFS;
 }
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index b47e4f38539b..8be4f179a371 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -390,50 +390,50 @@ int nfs_readpage(struct file *file, struct page *page)
         return ret;
 }
 
-int nfs_readpages(struct file *file, struct address_space *mapping,
-                  struct list_head *pages, unsigned nr_pages)
+void nfs_readahead(struct readahead_control *ractl)
 {
         struct nfs_readdesc desc;
-        struct inode *inode = mapping->host;
+        struct inode *inode = ractl->mapping->host;
+        struct page *page;
         int ret;
 
-        dprintk("NFS: nfs_readpages (%s/%Lu %d)\n",
-                        inode->i_sb->s_id,
-                        (unsigned long long)NFS_FILEID(inode),
-                        nr_pages);
+        dprintk("NFS: %s (%s/%llu %lld)\n", __func__,
+                inode->i_sb->s_id,
+                (unsigned long long)NFS_FILEID(inode),
+                readahead_length(ractl));
         nfs_inc_stats(inode, NFSIOS_VFSREADPAGES);
 
-        ret = -ESTALE;
         if (NFS_STALE(inode))
-                goto out;
+                return;
 
-        if (file == NULL) {
-                ret = -EBADF;
+        if (ractl->file == NULL) {
                 desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
                 if (desc.ctx == NULL)
-                        goto out;
+                        return;
         } else
-                desc.ctx = get_nfs_open_context(nfs_file_open_context(file));
+                desc.ctx = get_nfs_open_context(nfs_file_open_context(ractl->file));
 
         /* attempt to read as many of the pages as possible from the cache
          * - this returns -ENOBUFS immediately if the cookie is negative
          */
-        ret = nfs_readpages_from_fscache(desc.ctx, inode, mapping,
-                                         pages, &nr_pages);
+        ret = nfs_readahead_from_fscache(&desc, ractl);
         if (ret == 0)
                 goto read_complete; /* all pages were read */
 
         nfs_pageio_init_read(&desc.pgio, inode, false,
                              &nfs_async_read_completion_ops);
 
-        ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc);
+        while ((page = readahead_page(ractl))) {
+                ret = readpage_async_filler(&desc, page);
+                put_page(page);
+                if (unlikely(ret))
+                        break;
+        }
 
         nfs_pageio_complete_read(&desc.pgio, inode);
 
 read_complete:
         put_nfs_open_context(desc.ctx);
-out:
-        return ret;
 }
 
 int __init nfs_init_readpagecache(void)
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 3cfcf219e96b..968c79b1b09b 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -568,8 +568,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s
  * linux/fs/nfs/read.c
  */
 extern int nfs_readpage(struct file *, struct page *);
-extern int nfs_readpages(struct file *, struct address_space *,
-                struct list_head *, unsigned);
+extern void nfs_readahead(struct readahead_control *rac);
 
 /*
  * inline functions
diff --git a/include/linux/nfs_iostat.h b/include/linux/nfs_iostat.h
index 027874c36c88..8baf8fb7551d 100644
--- a/include/linux/nfs_iostat.h
+++ b/include/linux/nfs_iostat.h
@@ -53,7 +53,7 @@
  * NFS page counters
  *
  * These count the number of pages read or written via nfs_readpage(),
- * nfs_readpages(), or their write equivalents.
+ * nfs_readahead(), or their write equivalents.
  *
  * NB: When adding new byte counters, please include the measured
  * units in the name of each byte counter to help users of this