From patchwork Mon Apr 29 07:24:14 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13646406
From: Kefeng Wang
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Kefeng Wang
Subject: [PATCH rfc 1/4] mm: memory: add prepare_range_pte_entry()
Date: Mon, 29 Apr 2024 15:24:14 +0800
Message-ID: <20240429072417.2146732-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
References: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

In preparation for a separate filemap_set_pte_range(), factor the PTE entry
construction out of set_pte_range() into a new helper, prepare_range_pte_entry().
No functional changes.
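Condensed from the diff below (illustration only, not an extra hunk):
set_pte_range() keeps its behaviour, but now builds the PTE value through the
new helper and keeps only the installation logic:

	bool write = vmf->flags & FAULT_FLAG_WRITE;
	pte_t entry = prepare_range_pte_entry(vmf, write, folio, page, nr, addr);
	/* ... the existing set_pte_range() body then installs @entry ... */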
Signed-off-by: Kefeng Wang
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 33 ++++++++++++++++++++++-----------
 2 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..bcbeb8a4cd43 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1372,6 +1372,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
+pte_t prepare_range_pte_entry(struct vm_fault *vmf, bool write, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr);
 
diff --git a/mm/memory.c b/mm/memory.c
index 6647685fd3c4..ccbeb58fa136 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4652,19 +4652,11 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-/**
- * set_pte_range - Set a range of PTEs to point to pages in a folio.
- * @vmf: Fault decription.
- * @folio: The folio that contains @page.
- * @page: The first page to create a PTE for.
- * @nr: The number of PTEs to create.
- * @addr: The first address to create a PTE for.
- */
-void set_pte_range(struct vm_fault *vmf, struct folio *folio,
-		struct page *page, unsigned int nr, unsigned long addr)
+pte_t prepare_range_pte_entry(struct vm_fault *vmf, bool write,
+			      struct folio *folio, struct page *page,
+			      unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool prefault = in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
 
@@ -4680,6 +4672,25 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	if (unlikely(vmf_orig_pte_uffd_wp(vmf)))
 		entry = pte_mkuffd_wp(entry);
+
+	return entry;
+}
+
+/**
+ * set_pte_range - Set a range of PTEs to point to pages in a folio.
+ * @vmf: Fault description.
+ * @folio: The folio that contains @page.
+ * @page: The first page to create a PTE for.
+ * @nr: The number of PTEs to create.
+ * @addr: The first address to create a PTE for.
+ */
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	bool write = vmf->flags & FAULT_FLAG_WRITE;
+	pte_t entry = prepare_range_pte_entry(vmf, write, folio, page, nr, addr);
+
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
 		VM_BUG_ON_FOLIO(nr != 1, folio);

From patchwork Mon Apr 29 07:24:15 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13646404
From: Kefeng Wang
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Kefeng Wang
Subject: [PATCH rfc 2/4] mm: filemap: add filemap_set_pte_range()
Date: Mon, 29 Apr 2024 15:24:15 +0800
Message-ID: <20240429072417.2146732-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
References: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Add filemap_set_pte_range(), independent of set_pte_range(), to unify the
rss and folio reference updates for small and large folios. This also
prepares for the upcoming batched lruvec stat updates.
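Condensed from the diff below (illustration only): each call site that
previously did the PTE install, rss accounting and refcount bump by hand now
makes a single helper call:

	/* before */
	set_pte_range(vmf, folio, page, count, addr);
	*rss += count;
	folio_ref_add(folio, count);

	/* after */
	filemap_set_pte_range(vmf, folio, page, count, addr, rss);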
Signed-off-by: Kefeng Wang
---
 mm/filemap.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index ec273b00ce5f..7019692daddd 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3499,6 +3499,25 @@ static struct folio *next_uptodate_folio(struct xa_state *xas,
 	return NULL;
 }
 
+static void filemap_set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr,
+		unsigned long *rss)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	pte_t entry;
+
+	entry = prepare_range_pte_entry(vmf, false, folio, page, nr, addr);
+
+	folio_add_file_rmap_ptes(folio, page, nr, vma);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr);
+
+	*rss += nr;
+	folio_ref_add(folio, nr);
+}
+
 /*
  * Map page range [start_page, start_page + nr_pages) of folio.
  * start_page is gotten from start by folio_page(folio, start)
@@ -3539,9 +3558,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			continue;
 skip:
 		if (count) {
-			set_pte_range(vmf, folio, page, count, addr);
-			*rss += count;
-			folio_ref_add(folio, count);
+			filemap_set_pte_range(vmf, folio, page, count, addr, rss);
 			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
@@ -3554,9 +3571,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	} while (--nr_pages > 0);
 
 	if (count) {
-		set_pte_range(vmf, folio, page, count, addr);
-		*rss += count;
-		folio_ref_add(folio, count);
+		filemap_set_pte_range(vmf, folio, page, count, addr, rss);
 		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
@@ -3591,9 +3606,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	if (vmf->address == addr)
 		ret = VM_FAULT_NOPAGE;
 
-	set_pte_range(vmf, folio, page, 1, addr);
-	(*rss)++;
-	folio_ref_inc(folio);
+	filemap_set_pte_range(vmf, folio, page, 1, addr, rss);
 
 	return ret;
 }

From patchwork Mon Apr 29 07:24:16 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13646407
From: Kefeng Wang
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Kefeng Wang
Subject: [PATCH rfc 3/4] mm: filemap: move __lruvec_stat_mod_folio() out of filemap_set_pte_range()
Date: Mon, 29 Apr 2024 15:24:16 +0800
Message-ID: <20240429072417.2146732-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
References: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

Add __folio_add_file_rmap_ptes(), which does not update the lruvec stat, and
use it in filemap_set_pte_range(). With it, the lruvec stat update is moved
into the caller. No functional changes.

Signed-off-by: Kefeng Wang
---
 include/linux/rmap.h |  2 ++
 mm/filemap.c         | 27 ++++++++++++++++++---------
 mm/rmap.c            | 16 ++++++++++++++++
 3 files changed, 36 insertions(+), 9 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 7229b9baf20d..43014ddd06f9 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -242,6 +242,8 @@ void folio_add_anon_rmap_pmd(struct folio *, struct page *,
 		struct vm_area_struct *, unsigned long address, rmap_t flags);
 void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address);
+int __folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
+		struct vm_area_struct *);
 void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
 		struct vm_area_struct *);
 #define folio_add_file_rmap_pte(folio, page, vma) \
diff --git a/mm/filemap.c b/mm/filemap.c
index 7019692daddd..3966b6616d02 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3501,14 +3501,15 @@ static struct folio *next_uptodate_folio(struct xa_state *xas,
 
 static void filemap_set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr,
-		unsigned long *rss)
+		unsigned long *rss, int *nr_mapped)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	pte_t entry;
 
 	entry = prepare_range_pte_entry(vmf, false, folio, page, nr, addr);
 
-	folio_add_file_rmap_ptes(folio, page, nr, vma);
+	*nr_mapped += __folio_add_file_rmap_ptes(folio, page, nr, vma);
+
 	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
 
 	/* no need to invalidate: a not-present page won't be cached */
@@ -3525,7 +3526,8 @@ static void filemap_set_pte_range(struct vm_fault *vmf, struct folio *folio,
 static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			struct folio *folio, unsigned long start,
 			unsigned long addr, unsigned int nr_pages,
-			unsigned long *rss, unsigned int *mmap_miss)
+			unsigned long *rss, int *nr_mapped,
+			unsigned int *mmap_miss)
 {
 	vm_fault_t ret = 0;
 	struct page *page = folio_page(folio, start);
@@ -3558,7 +3560,8 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			continue;
 skip:
 		if (count) {
-			filemap_set_pte_range(vmf, folio, page, count, addr, rss);
+			filemap_set_pte_range(vmf, folio, page, count, addr,
+					      rss, nr_mapped);
 			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
@@ -3571,7 +3574,8 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	} while (--nr_pages > 0);
 
 	if (count) {
-		filemap_set_pte_range(vmf, folio, page, count, addr, rss);
+		filemap_set_pte_range(vmf, folio, page, count, addr, rss,
+				      nr_mapped);
 		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
@@ -3583,7 +3587,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 
 static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 		struct folio *folio, unsigned long addr,
-		unsigned long *rss, unsigned int *mmap_miss)
+		unsigned long *rss, int *nr_mapped, unsigned int *mmap_miss)
 {
 	vm_fault_t ret = 0;
 	struct page *page = &folio->page;
@@ -3606,7 +3610,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	if (vmf->address == addr)
 		ret = VM_FAULT_NOPAGE;
 
-	filemap_set_pte_range(vmf, folio, page, 1, addr, rss);
+	filemap_set_pte_range(vmf, folio, page, 1, addr, rss, nr_mapped);
 
 	return ret;
 }
@@ -3646,6 +3650,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	folio_type = mm_counter_file(folio);
 	do {
 		unsigned long end;
+		int nr_mapped = 0;
 
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
@@ -3655,11 +3660,15 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 
 		if (!folio_test_large(folio))
 			ret |= filemap_map_order0_folio(vmf,
-					folio, addr, &rss, &mmap_miss);
+					folio, addr, &rss, &nr_mapped,
+					&mmap_miss);
 		else
 			ret |= filemap_map_folio_range(vmf, folio,
 					xas.xa_index - folio->index, addr,
-					nr_pages, &rss, &mmap_miss);
+					nr_pages, &rss, &nr_mapped,
+					&mmap_miss);
+
+		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr_mapped);
 
 		folio_unlock(folio);
 		folio_put(folio);
diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..55face4024f2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1452,6 +1452,22 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
 		mlock_vma_folio(folio, vma);
 }
 
+int __folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
+		int nr_pages, struct vm_area_struct *vma)
+{
+	int nr, nr_pmdmapped = 0;
+
+	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
+
+	nr = __folio_add_rmap(folio, page, nr_pages, RMAP_LEVEL_PTE,
+			      &nr_pmdmapped);
+
+	/* See comments in folio_add_anon_rmap_*() */
+	if (!folio_test_large(folio))
+		mlock_vma_folio(folio, vma);
+
+	return nr;
+}
 /**
  * folio_add_file_rmap_ptes - add PTE mappings to a page range of a folio
  * @folio: The folio to add the mappings to
From patchwork Mon Apr 29 07:24:17 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13646403
From: Kefeng Wang
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", Kefeng Wang
Subject: [PATCH rfc 4/4] mm: filemap: try to batch lruvec stat updating
Date: Mon, 29 Apr 2024 15:24:17 +0800
Message-ID: <20240429072417.2146732-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
References: <20240429072417.2146732-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

filemap_map_pages() tries to map several pages at a time (e.g. 16 pages), but
the lruvec stat update is performed for each mapping. Since the update is
time-consuming, especially with memcg, batch it for as long as the memcg and
pgdat stay the same across the mappings; in the common case this saves most of
the lruvec stat updates. lat_pagefault shows a 3~4% improvement.
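To make the pattern easier to see before reading the diff, the batching boils
down to the following (condensed from the hunks below, not an extra change):

	/* flush the pending NR_FILE_MAPPED delta once the folio's memcg or pgdat changes */
	if (unlikely(memcg != memcg_cur || pgdat != pgdat_cur)) {
		filemap_lruvec_stat_update(memcg, pgdat, nr_mapped);
		nr_mapped = 0;
		memcg = memcg_cur;
		pgdat = pgdat_cur;
	}
	...
	/* and flush once more after the mapping loop */
	filemap_lruvec_stat_update(memcg, pgdat, nr_mapped);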
Signed-off-by: Kefeng Wang
---
 mm/filemap.c | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 3966b6616d02..b27281707098 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3615,6 +3615,20 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	return ret;
 }
 
+static void filemap_lruvec_stat_update(struct mem_cgroup *memcg,
+		pg_data_t *pgdat, int nr)
+{
+	struct lruvec *lruvec;
+
+	if (!memcg) {
+		__mod_node_page_state(pgdat, NR_FILE_MAPPED, nr);
+		return;
+	}
+
+	lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	__mod_lruvec_state(lruvec, NR_FILE_MAPPED, nr);
+}
+
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 			     pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
@@ -3628,6 +3642,9 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	vm_fault_t ret = 0;
 	unsigned long rss = 0;
 	unsigned int nr_pages = 0, mmap_miss = 0, mmap_miss_saved, folio_type;
+	struct mem_cgroup *memcg, *memcg_cur;
+	pg_data_t *pgdat, *pgdat_cur;
+	int nr_mapped = 0;
 
 	rcu_read_lock();
 	folio = next_uptodate_folio(&xas, mapping, end_pgoff);
@@ -3648,9 +3665,20 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	}
 
 	folio_type = mm_counter_file(folio);
+	memcg = folio_memcg(folio);
+	pgdat = folio_pgdat(folio);
 	do {
 		unsigned long end;
-		int nr_mapped = 0;
+
+		memcg_cur = folio_memcg(folio);
+		pgdat_cur = folio_pgdat(folio);
+
+		if (unlikely(memcg != memcg_cur || pgdat != pgdat_cur)) {
+			filemap_lruvec_stat_update(memcg, pgdat, nr_mapped);
+			nr_mapped = 0;
+			memcg = memcg_cur;
+			pgdat = pgdat_cur;
+		}
 
 		addr += (xas.xa_index - last_pgoff) << PAGE_SHIFT;
 		vmf->pte += xas.xa_index - last_pgoff;
@@ -3668,11 +3696,10 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 					nr_pages, &rss, &nr_mapped,
 					&mmap_miss);
 
-		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr_mapped);
-
 		folio_unlock(folio);
 		folio_put(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
+	filemap_lruvec_stat_update(memcg, pgdat, nr_mapped);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out: