From patchwork Fri May 26 19:59:32 2017
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9751053
From: Ross Zwisler
To: Andrew Morton, linux-kernel@vger.kernel.org
Subject: [PATCH] dax: improve fix for colliding PMD & PTE entries
Date: Fri, 26 May 2017 13:59:32 -0600
Message-Id: <20170526195932.32178-1-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <20170522215749.23516-2-ross.zwisler@linux.intel.com>
References: <20170522215749.23516-2-ross.zwisler@linux.intel.com>
List-Id: "Linux-nvdimm developer list."
Cc: Jan Kara, Eryu Guan, "Darrick J. Wong", Matthew Wilcox,
 stable@vger.kernel.org, linux-mm@kvack.org, Dave Hansen,
 Alexander Viro, linux-fsdevel@vger.kernel.org, Christoph Hellwig,
 "Kirill A. Shutemov", linux-nvdimm@lists.01.org

This commit, which has not yet made it upstream but is in the -mm tree:

    dax: Fix race between colliding PMD & PTE entries

fixed a pair of race conditions where racing DAX PTE and PMD faults
could corrupt page tables.  This fix had two shortcomings which are
addressed by this patch:

1) In the PTE fault handler we only checked for a collision using
pmd_devmap().
The pmd_devmap() check will trigger when we have raced with a PMD that
has real DAX storage, but to account for the case where we collide with
a huge zero page entry we also need to check for pmd_trans_huge().

2) In the PMD fault handler we only continued with the fault if no PMD
at all was present (pmd_none()).  This is the case when we are faulting
in a PMD for the first time, but there are two other cases to consider.
The first is that we are servicing a write fault over a PMD huge zero
page, which we detect with pmd_trans_huge().  The second is that we are
servicing a write fault over a DAX PMD with real storage, which we
address with pmd_devmap().

Fix both of these issues.  Also, instead of manually triggering a PTE
fallback in the PMD collision case, be consistent with the other
collision detection code in the fault handlers and just retry.

Signed-off-by: Ross Zwisler
Cc: stable@vger.kernel.org
Reviewed-by: Jan Kara
---
For both the -mm tree and for stable, feel free to squash this with the
original commit if you think that is appropriate.

This has passed targeted testing and an xfstests run.
---
 fs/dax.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fc62f36..2a6889b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1160,7 +1160,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	 * the PTE we need to set up. If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
 		vmf_ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -1411,11 +1411,14 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 	/*
 	 * It is possible, particularly with mixed reads & writes to private
 	 * mappings, that we have raced with a PTE fault that overlaps with
-	 * the PMD we need to set up. If so we just fall back to a PTE fault
-	 * ourselves.
+	 * the PMD we need to set up. If so just return and the fault will be
+	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd))
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
+	    !pmd_devmap(*vmf->pmd)) {
+		result = 0;
 		goto unlock_entry;
+	}
 
 	/*
 	 * Note that we don't use iomap_apply here. We aren't doing I/O, only
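[ For readers following along: below is a minimal illustrative sketch,
  not part of the patch.  fs/dax.c open-codes these checks inline in
  dax_iomap_pte_fault() and dax_iomap_pmd_fault(); the helper names
  here are hypothetical and exist only to restate the two collision
  tests the patch installs. ]

#include <linux/mm.h>		/* struct vm_fault */
#include <linux/huge_mm.h>	/* pmd_trans_huge(), pmd_devmap() */

/*
 * PTE fault: a racing PMD fault may have installed a huge entry where
 * our PTE belongs.  pmd_trans_huge() catches a collision with the PMD
 * huge zero page; pmd_devmap() catches a PMD backed by real DAX
 * storage.  On a collision the PTE handler returns VM_FAULT_NOPAGE so
 * the fault is retried.
 */
static bool dax_pte_fault_collided(struct vm_fault *vmf)
{
	return pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd);
}

/*
 * PMD fault: we may proceed when the PMD is empty (first fault), is
 * the huge zero page (write fault over it), or is a DAX PMD with real
 * storage (write fault over it).  Anything else -- i.e. a PTE page
 * table installed by a racing PTE fault -- is a collision, so the
 * handler returns 0 and the fault is retried.
 */
static bool dax_pmd_fault_collided(struct vm_fault *vmf)
{
	return !pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
	       !pmd_devmap(*vmf->pmd);
}

Returning and retrying in both collision cases, rather than falling
back to PTEs from the PMD handler, keeps the two handlers' collision
paths symmetric: whichever fault lost the race simply re-walks the page
tables and sees the entry its rival installed.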