From patchwork Wed Aug 2 15:14:05 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13338364
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 37/38] filemap: Batch PTE mappings
Date: Wed, 2 Aug 2023 16:14:05 +0100
Message-Id: <20230802151406.3735276-38-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>

From: Yin Fengwei

Call set_pte_range() once per contiguous range of the folio instead of
once per page.  This batches the updates to mm counters and the rmap.

With a will-it-scale.page_fault3-like app (the file write fault testing
changed to read fault testing; we are trying to upstream it to
will-it-scale at [1]), we got a 15% performance gain on a 48C/96T
Cascade Lake test box with 96 processes running against xfs.

Perf data collected before/after the change:

  18.73%--page_add_file_rmap
          |
           --11.60%--__mod_lruvec_page_state
                     |
                     |--7.40%--__mod_memcg_lruvec_state
                     |         |
                     |          --5.58%--cgroup_rstat_updated
                     |
                      --2.53%--__mod_lruvec_state
                                |
                                 --1.48%--__mod_node_page_state

   9.93%--page_add_file_rmap_range
          |
           --2.67%--__mod_lruvec_page_state
                    |
                    |--1.95%--__mod_memcg_lruvec_state
                    |         |
                    |          --1.57%--cgroup_rstat_updated
                    |
                     --0.61%--__mod_lruvec_state
                               |
                                --0.54%--__mod_node_page_state

The running time of __mod_lruvec_page_state() is reduced by about 9%.
[1]: https://github.com/antonblanchard/will-it-scale/pull/37

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 2e7050461a87..bf6219d9aaac 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3485,11 +3485,12 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct file *file = vma->vm_file;
 	struct page *page = folio_page(folio, start);
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
-	unsigned int ref_count = 0, count = 0;
+	unsigned int count = 0;
+	pte_t *old_ptep = vmf->pte;
 
 	do {
-		if (PageHWPoison(page))
-			continue;
+		if (PageHWPoison(page + count))
+			goto skip;
 
 		if (mmap_miss > 0)
 			mmap_miss--;
@@ -3499,20 +3500,34 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		 * handled in the specific fault path, and it'll prohibit the
 		 * fault-around logic.
 		 */
-		if (!pte_none(*vmf->pte))
-			continue;
-
-		if (vmf->address == addr)
-			ret = VM_FAULT_NOPAGE;
+		if (!pte_none(vmf->pte[count]))
+			goto skip;
 
-		ref_count++;
-		set_pte_range(vmf, folio, page, 1, addr);
-	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
+		count++;
+		continue;
+skip:
+		if (count) {
+			set_pte_range(vmf, folio, page, count, addr);
+			folio_ref_add(folio, count);
+			if (in_range(vmf->address, addr, count))
+				ret = VM_FAULT_NOPAGE;
+		}
 
-	/* Restore the vmf->pte */
-	vmf->pte -= nr_pages;
+		count++;
+		page += count;
+		vmf->pte += count;
+		addr += count * PAGE_SIZE;
+		count = 0;
+	} while (--nr_pages > 0);
+
+	if (count) {
+		set_pte_range(vmf, folio, page, count, addr);
+		folio_ref_add(folio, count);
+		if (in_range(vmf->address, addr, count))
+			ret = VM_FAULT_NOPAGE;
+	}
 
-	folio_ref_add(folio, ref_count);
+	vmf->pte = old_ptep;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 
 	return ret;
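
To make the batching idea easier to follow outside of diff context, the
pattern can be sketched as a tiny standalone C program.  This is only an
illustration, not the kernel code: slot_is_free() and map_run() are
hypothetical stand-ins for pte_none() and set_pte_range()/folio_ref_add(),
and the HWPoison, mmap_miss and VM_FAULT_NOPAGE handling is omitted.

/*
 * Illustrative sketch of the run-batching pattern (not kernel code).
 * Free slots are accumulated into a run counted by `count`; the run is
 * flushed with one map_run() call whenever an unusable slot ends it, and
 * once more after the loop for any trailing run.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8

/* Hypothetical stand-in for pte_none(): can slot i be mapped? */
static bool slot_is_free(const bool *occupied, unsigned int i)
{
	return !occupied[i];
}

/* Hypothetical stand-in for set_pte_range() + folio_ref_add(). */
static void map_run(unsigned int first, unsigned int count)
{
	printf("map %u page(s) starting at slot %u\n", count, first);
}

int main(void)
{
	/* Slots 3 and 6 are already populated and break the runs. */
	bool occupied[NR_PAGES] = { false, false, false, true,
				    false, false, true, false };
	unsigned int i, count = 0;

	for (i = 0; i < NR_PAGES; i++) {
		if (slot_is_free(occupied, i)) {
			count++;		/* extend the current run */
			continue;
		}
		if (count)			/* flush the finished run */
			map_run(i - count, count);
		count = 0;
	}
	if (count)				/* flush the trailing run */
		map_run(NR_PAGES - count, count);

	return 0;
}

The point is the same as in the patch: the per-call bookkeeping (folio
refcount, rmap, mm counters) is paid once per contiguous run rather than
once per page, which is where the __mod_lruvec_page_state() savings come
from.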