From patchwork Fri May 26 19:59:32 2017
X-Patchwork-Submitter: Ross Zwisler
X-Patchwork-Id: 9751057
From: Ross Zwisler
To: Andrew Morton, linux-kernel@vger.kernel.org
Cc: Ross Zwisler, "Darrick J. Wong", Alexander Viro, Christoph Hellwig,
    Dan Williams, Dave Hansen, Jan Kara, Matthew Wilcox,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-nvdimm@lists.01.org, "Kirill A . Shutemov", Pawel Lebioda,
    Dave Jiang, Xiong Zhou, Eryu Guan, stable@vger.kernel.org
Subject: [PATCH] dax: improve fix for colliding PMD & PTE entries
Date: Fri, 26 May 2017 13:59:32 -0600
Message-Id: <20170526195932.32178-1-ross.zwisler@linux.intel.com>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <20170522215749.23516-2-ross.zwisler@linux.intel.com>
References: <20170522215749.23516-2-ross.zwisler@linux.intel.com>

This commit, which has not yet made it upstream but is in the -mm tree:

  dax: Fix race between colliding PMD & PTE entries

fixed a pair of race conditions where racing DAX PTE and PMD faults could
corrupt page tables.  That fix had two shortcomings, which are addressed
by this patch:

1) In the PTE fault handler we only checked for a collision using
   pmd_devmap().  The pmd_devmap() check will trigger when we have raced
   with a PMD that has real DAX storage, but to account for the case where
   we collide with a huge zero page entry we also need to check for
   pmd_trans_huge().

2) In the PMD fault handler we only continued with the fault if no PMD at
   all was present (pmd_none()).  This is the case when we are faulting in
   a PMD for the first time, but there are two other cases to consider.
   The first is that we are servicing a write fault over a PMD huge zero
   page, which we detect with pmd_trans_huge().  The second is that we are
   servicing a write fault over a DAX PMD with real storage, which we
   address with pmd_devmap().

Fix both of these, and instead of manually triggering a fallback in the
PMD collision case, be consistent with the other collision detection code
in the fault handlers and just retry.
Signed-off-by: Ross Zwisler
Cc: stable@vger.kernel.org
Reviewed-by: Jan Kara
---
For both the -mm tree and for stable, feel free to squash this with the
original commit if you think that is appropriate.  This has passed
targeted testing and an xfstests run.
---
 fs/dax.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fc62f36..2a6889b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1160,7 +1160,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	 * the PTE we need to set up.  If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
 		vmf_ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -1411,11 +1411,14 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 	/*
 	 * It is possible, particularly with mixed reads & writes to private
 	 * mappings, that we have raced with a PTE fault that overlaps with
-	 * the PMD we need to set up.  If so we just fall back to a PTE fault
-	 * ourselves.
+	 * the PMD we need to set up.  If so just return and the fault will be
+	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd))
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
+			!pmd_devmap(*vmf->pmd)) {
+		result = 0;
 		goto unlock_entry;
+	}

 	/*
 	 * Note that we don't use iomap_apply here.  We aren't doing I/O, only