From patchwork Mon Dec 16 15:53:55 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13909933
From: "Matthew Wilcox (Oracle)"
To: Dan Williams
Cc: "Matthew Wilcox (Oracle)", Vishal Verma, Dave Jiang,
 nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 1/2] dax: Remove access to page->index
Date: Mon, 16 Dec 2024 15:53:55 +0000
Message-ID: <20241216155408.8102-1-willy@infradead.org>

This looks like a complete mess (why are we setting page->index at page
fault time?), but I no longer care about DAX, and there's no reason to
let DAX hold us back from removing page->index.
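The explicit compound_head() call disappears because pfn_folio() already
resolves through the head page.  A minimal sketch of the equivalence (not
part of the patch; the helper name sketch_pfn_folio is illustrative):

	/*
	 * pfn_folio() is defined in terms of page_folio(), which returns
	 * the folio containing the head page even for a tail pfn, so the
	 * pfn_to_page() + compound_head() pair collapses into one call.
	 */
	static struct folio *sketch_pfn_folio(unsigned long pfn)
	{
		return page_folio(pfn_to_page(pfn));
	}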
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jane Chu
---
 drivers/dax/device.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 6d74e62bbee0..bc871a34b9cd 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -89,14 +89,13 @@ static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
 			ALIGN_DOWN(vmf->address, fault_size));
 
 	for (i = 0; i < nr_pages; i++) {
-		struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
+		struct folio *folio = pfn_folio(pfn_t_to_pfn(pfn) + i);
 
-		page = compound_head(page);
-		if (page->mapping)
+		if (folio->mapping)
 			continue;
 
-		page->mapping = filp->f_mapping;
-		page->index = pgoff + i;
+		folio->mapping = filp->f_mapping;
+		folio->index = pgoff + i;
 	}
 }
 
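The fault-time association above is what later lets a pfn be mapped back
to a file offset (memory-failure handling is one consumer).  A hedged
sketch of that reverse lookup, using only the fields dax_set_mapping()
sets (the function name sketch_folio_to_pgoff is illustrative):

	/*
	 * Illustrative reverse lookup: a folio that dax_set_mapping()
	 * associated carries the owning address_space and file index.
	 */
	static pgoff_t sketch_folio_to_pgoff(const struct folio *folio)
	{
		WARN_ON_ONCE(!folio->mapping);	/* never faulted in */
		return folio->index;		/* offset in folio->mapping */
	}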
From patchwork Mon Dec 16 15:53:56 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13909934
From: "Matthew Wilcox (Oracle)"
To: Dan Williams
Cc: "Matthew Wilcox (Oracle)", Vishal Verma, Dave Jiang,
 nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/2] dax: Use folios more widely within DAX
Date: Mon, 16 Dec 2024 15:53:56 +0000
Message-ID: <20241216155408.8102-2-willy@infradead.org>
In-Reply-To: <20241216155408.8102-1-willy@infradead.org>
References: <20241216155408.8102-1-willy@infradead.org>

Convert from pfn to folio instead of page, and use those folios
throughout to avoid accesses to page->index and page->mapping.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Jane Chu
---
 fs/dax.c | 53 +++++++++++++++++++++++++++--------------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 21b47402b3dc..972febc6fb9d 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -320,38 +320,39 @@ static unsigned long dax_end_pfn(void *entry)
 	for (pfn = dax_to_pfn(entry); \
 			pfn < dax_end_pfn(entry); pfn++)
 
-static inline bool dax_page_is_shared(struct page *page)
+static inline bool dax_folio_is_shared(struct folio *folio)
 {
-	return page->mapping == PAGE_MAPPING_DAX_SHARED;
+	return folio->mapping == PAGE_MAPPING_DAX_SHARED;
 }
 
 /*
- * Set the page->mapping with PAGE_MAPPING_DAX_SHARED flag, increase the
+ * Set the folio->mapping with PAGE_MAPPING_DAX_SHARED flag, increase the
  * refcount.
  */
-static inline void dax_page_share_get(struct page *page)
+static inline void dax_folio_share_get(struct folio *folio)
 {
-	if (page->mapping != PAGE_MAPPING_DAX_SHARED) {
+	if (folio->mapping != PAGE_MAPPING_DAX_SHARED) {
 		/*
 		 * Reset the index if the page was already mapped
 		 * regularly before.
 		 */
-		if (page->mapping)
-			page->share = 1;
-		page->mapping = PAGE_MAPPING_DAX_SHARED;
+		if (folio->mapping)
+			folio->page.share = 1;
+		folio->mapping = PAGE_MAPPING_DAX_SHARED;
 	}
-	page->share++;
+	folio->page.share++;
 }
 
-static inline unsigned long dax_page_share_put(struct page *page)
+static inline unsigned long dax_folio_share_put(struct folio *folio)
 {
-	return --page->share;
+	return --folio->page.share;
 }
 
 /*
- * When it is called in dax_insert_entry(), the shared flag will indicate that
- * whether this entry is shared by multiple files. If so, set the page->mapping
- * PAGE_MAPPING_DAX_SHARED, and use page->share as refcount.
+ * When it is called in dax_insert_entry(), the shared flag will indicate
+ * that whether this entry is shared by multiple files. If so, set
+ * the folio->mapping PAGE_MAPPING_DAX_SHARED, and use page->share
+ * as refcount.
  */
 static void dax_associate_entry(void *entry, struct address_space *mapping,
 		struct vm_area_struct *vma, unsigned long address, bool shared)
@@ -364,14 +365,14 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
 	index = linear_page_index(vma, address & ~(size - 1));
 	for_each_mapped_pfn(entry, pfn) {
-		struct page *page = pfn_to_page(pfn);
+		struct folio *folio = pfn_folio(pfn);
 
 		if (shared) {
-			dax_page_share_get(page);
+			dax_folio_share_get(folio);
 		} else {
-			WARN_ON_ONCE(page->mapping);
-			page->mapping = mapping;
-			page->index = index + i++;
+			WARN_ON_ONCE(folio->mapping);
+			folio->mapping = mapping;
+			folio->index = index + i++;
 		}
 	}
 }
@@ -385,17 +386,17 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 		return;
 
 	for_each_mapped_pfn(entry, pfn) {
-		struct page *page = pfn_to_page(pfn);
+		struct folio *folio = pfn_folio(pfn);
 
-		WARN_ON_ONCE(trunc && page_ref_count(page) > 1);
-		if (dax_page_is_shared(page)) {
+		WARN_ON_ONCE(trunc && folio_ref_count(folio) > 1);
+		if (dax_folio_is_shared(folio)) {
 			/* keep the shared flag if this page is still shared */
-			if (dax_page_share_put(page) > 0)
+			if (dax_folio_share_put(folio) > 0)
 				continue;
 		} else
-			WARN_ON_ONCE(page->mapping && page->mapping != mapping);
-		page->mapping = NULL;
-		page->index = 0;
+			WARN_ON_ONCE(folio->mapping && folio->mapping != mapping);
+		folio->mapping = NULL;
+		folio->index = 0;
 	}
 }
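Taken together, the two helpers implement a small get/put protocol in
which folio->mapping doubles as the shared marker and folio->page.share
as the reference count.  A condensed, illustrative caller-side view (the
sequencing and comments are mine, not the patch's):

	/* Illustrative lifecycle for an entry shared by two files. */
	dax_folio_share_get(folio);		/* file A: marks the folio shared */
	dax_folio_share_get(folio);		/* file B: bumps folio->page.share */

	if (dax_folio_share_put(folio) > 0)	/* file B detaches: count drops, */
		return;				/* but A still shares the folio */

	folio->mapping = NULL;			/* file A detaches: last put, so */
	folio->index = 0;			/* the association can be cleared */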