From patchwork Wed Jun 27 21:22:51 2018
X-Patchwork-Id: 10492697
From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Jan Kara, Dan Williams, Dave Chinner, "Darrick J. Wong",
	Christoph Hellwig, linux-nvdimm@lists.01.org, Jeff Moyer,
	linux-ext4@vger.kernel.org
Subject: [PATCH v2 1/2] dax: dax_layout_busy_page() warn on !exceptional
Date: Wed, 27 Jun 2018 15:22:51 -0600
Message-Id: <20180627212252.31032-2-ross.zwisler@linux.intel.com>
In-Reply-To: <20180627212252.31032-1-ross.zwisler@linux.intel.com>
References: <20180627212252.31032-1-ross.zwisler@linux.intel.com>

Inodes using DAX should only ever have exceptional entries in their page
caches. Make this clear by warning if the iteration in
dax_layout_busy_page() ever sees a non-exceptional entry, and by adding a
comment for the pagevec_release() call, which only deals with struct page
pointers.
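For context (not part of this patch): the "exceptional" test relies on the
low tag bit that the radix tree reserves for entries that are not struct
page pointers. A minimal sketch of the helper, paraphrased from
include/linux/radix-tree.h in this kernel version:

	#define RADIX_TREE_EXCEPTIONAL_ENTRY	2

	/* Non-zero when the entry is a tagged value, not a struct page pointer. */
	static inline int radix_tree_exceptional_entry(void *arg)
	{
		return (unsigned long)arg & RADIX_TREE_EXCEPTIONAL_ENTRY;
	}

DAX stores its mapping state as such exceptional entries, so a plain
struct page pointer showing up in this iteration would indicate a bug,
which is what the new WARN_ON_ONCE() makes visible.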
Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
---
 fs/dax.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 641192808bb6..897b51e41d8f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -566,7 +566,8 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 			if (index >= end)
 				break;
 
-			if (!radix_tree_exceptional_entry(pvec_ent))
+			if (WARN_ON_ONCE(
+				!radix_tree_exceptional_entry(pvec_ent)))
 				continue;
 
 			xa_lock_irq(&mapping->i_pages);
@@ -578,6 +579,13 @@ struct page *dax_layout_busy_page(struct address_space *mapping)
 			if (page)
 				break;
 		}
+
+		/*
+		 * We don't expect normal struct page entries to exist in our
+		 * tree, but we keep these pagevec calls so that this code is
+		 * consistent with the common pattern for handling pagevecs
+		 * throughout the kernel.
+		 */
 		pagevec_remove_exceptionals(&pvec);
 		pagevec_release(&pvec);
 		index++;
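As an aside for reviewers: the pagevec calls kept by the new comment follow
the usual lookup/iterate/release idiom used elsewhere in the kernel. A
rough sketch of that pattern, with the per-entry work elided and "mapping"
assumed to be the address_space being scanned (illustrative only, not code
from this patch):

	struct pagevec pvec;
	pgoff_t indices[PAGEVEC_SIZE];
	pgoff_t index = 0, end = -1;
	unsigned i;

	pagevec_init(&pvec);
	while (pagevec_lookup_entries(&pvec, mapping, index,
				min(end - index, (pgoff_t)PAGEVEC_SIZE),
				indices)) {
		for (i = 0; i < pagevec_count(&pvec); i++) {
			void *entry = pvec.pages[i];

			index = indices[i];
			/* per-entry work goes here */
		}
		/* Drop exceptional entries so pagevec_release() only sees pages. */
		pagevec_remove_exceptionals(&pvec);
		pagevec_release(&pvec);
		index++;
	}

dax_layout_busy_page() keeps this shape even though, with the warning
added above, only exceptional entries are ever expected in its tree.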